2023-05-22 16:55:26,348 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20 2023-05-22 16:55:26,366 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-22 16:55:26,399 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=211, ProcessCount=169, AvailableMemoryMB=6603 2023-05-22 16:55:26,406 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 16:55:26,406 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca, deleteOnExit=true 2023-05-22 16:55:26,407 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 16:55:26,407 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/test.cache.data in system properties and HBase conf 2023-05-22 16:55:26,408 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 16:55:26,409 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/hadoop.log.dir in system properties and HBase conf 2023-05-22 16:55:26,409 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 16:55:26,410 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-22 16:55:26,410 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 16:55:26,528 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-22 16:55:26,925 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-22 16:55:26,931 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:55:26,931 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:55:26,931 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 16:55:26,931 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:55:26,932 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 16:55:26,932 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 16:55:26,932 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:55:26,933 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:55:26,933 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 16:55:26,933 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/nfs.dump.dir in system properties and HBase conf 2023-05-22 16:55:26,933 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/java.io.tmpdir in system properties and HBase conf 2023-05-22 16:55:26,933 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:55:26,934 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 16:55:26,934 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 16:55:27,384 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 16:55:27,399 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:55:27,403 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:55:27,688 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-22 16:55:27,843 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-22 16:55:27,857 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:55:27,893 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:55:27,954 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/java.io.tmpdir/Jetty_localhost_37089_hdfs____.hl1udl/webapp 2023-05-22 16:55:28,075 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37089 2023-05-22 16:55:28,082 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-22 16:55:28,085 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:55:28,086 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:55:28,508 WARN [Listener at localhost/37047] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:55:28,579 WARN [Listener at localhost/37047] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:55:28,601 WARN [Listener at localhost/37047] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:55:28,607 INFO [Listener at localhost/37047] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:55:28,611 INFO [Listener at localhost/37047] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/java.io.tmpdir/Jetty_localhost_35139_datanode____.o8z2lg/webapp 2023-05-22 16:55:28,710 INFO [Listener at localhost/37047] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35139 2023-05-22 16:55:29,014 WARN [Listener at localhost/43579] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:55:29,025 WARN [Listener at localhost/43579] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:55:29,030 WARN [Listener at localhost/43579] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:55:29,031 INFO [Listener at localhost/43579] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:55:29,036 INFO [Listener at localhost/43579] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/java.io.tmpdir/Jetty_localhost_35197_datanode____6bj0lw/webapp 2023-05-22 16:55:29,132 INFO [Listener at localhost/43579] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35197 2023-05-22 16:55:29,142 WARN [Listener at localhost/45465] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:55:29,471 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf6f5d998c08813f5: Processing first storage report for DS-69b2633b-f4cb-4531-9587-2e0b081ec070 from datanode 8f78caec-345f-47eb-bc9a-fe13d72c8b6a 2023-05-22 16:55:29,473 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf6f5d998c08813f5: from storage DS-69b2633b-f4cb-4531-9587-2e0b081ec070 node DatanodeRegistration(127.0.0.1:34015, datanodeUuid=8f78caec-345f-47eb-bc9a-fe13d72c8b6a, infoPort=37291, infoSecurePort=0, ipcPort=43579, storageInfo=lv=-57;cid=testClusterID;nsid=1344593953;c=1684774527478), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-05-22 16:55:29,473 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0xae54acd654803f8a: Processing first storage report for DS-04b25b46-8162-44e7-bc73-f17122f87c99 from datanode a40152c2-af2c-4a62-914e-71ae7417cdb8 2023-05-22 16:55:29,473 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae54acd654803f8a: from storage DS-04b25b46-8162-44e7-bc73-f17122f87c99 node DatanodeRegistration(127.0.0.1:37373, datanodeUuid=a40152c2-af2c-4a62-914e-71ae7417cdb8, infoPort=40899, infoSecurePort=0, ipcPort=45465, storageInfo=lv=-57;cid=testClusterID;nsid=1344593953;c=1684774527478), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:55:29,473 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf6f5d998c08813f5: Processing first storage report for DS-5d070575-8e86-4d5e-8a94-7fa5c8e99a64 from datanode 8f78caec-345f-47eb-bc9a-fe13d72c8b6a 2023-05-22 16:55:29,473 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf6f5d998c08813f5: from storage DS-5d070575-8e86-4d5e-8a94-7fa5c8e99a64 node DatanodeRegistration(127.0.0.1:34015, datanodeUuid=8f78caec-345f-47eb-bc9a-fe13d72c8b6a, infoPort=37291, infoSecurePort=0, ipcPort=43579, storageInfo=lv=-57;cid=testClusterID;nsid=1344593953;c=1684774527478), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:55:29,473 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xae54acd654803f8a: Processing first storage report for DS-c6f866fd-1115-4eda-bb0d-e2af2d15d826 from datanode a40152c2-af2c-4a62-914e-71ae7417cdb8 2023-05-22 16:55:29,474 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae54acd654803f8a: from storage DS-c6f866fd-1115-4eda-bb0d-e2af2d15d826 node DatanodeRegistration(127.0.0.1:37373, datanodeUuid=a40152c2-af2c-4a62-914e-71ae7417cdb8, infoPort=40899, infoSecurePort=0, ipcPort=45465, storageInfo=lv=-57;cid=testClusterID;nsid=1344593953;c=1684774527478), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:55:29,553 DEBUG [Listener at localhost/45465] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20 2023-05-22 16:55:29,615 INFO [Listener at localhost/45465] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/zookeeper_0, clientPort=61798, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 16:55:29,628 INFO [Listener at localhost/45465] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61798 2023-05-22 16:55:29,636 INFO [Listener at localhost/45465] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:29,638 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:30,297 INFO [Listener at localhost/45465] util.FSUtils(471): Created version file at hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53 with version=8 2023-05-22 16:55:30,297 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 16:55:30,593 INFO [Listener at localhost/45465] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-22 16:55:31,077 INFO [Listener at localhost/45465] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:55:31,109 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,110 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,110 INFO [Listener at localhost/45465] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:55:31,110 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,110 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:55:31,250 INFO [Listener at localhost/45465] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:55:31,328 DEBUG [Listener at localhost/45465] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-22 16:55:31,422 INFO [Listener at localhost/45465] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43169 2023-05-22 16:55:31,432 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:31,434 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:31,455 INFO [Listener at localhost/45465] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43169 connecting to ZooKeeper ensemble=127.0.0.1:61798 2023-05-22 16:55:31,495 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): 
master:431690x0, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:55:31,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43169-0x10053d1ad430000 connected 2023-05-22 16:55:31,523 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:55:31,523 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:55:31,527 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:55:31,534 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43169 2023-05-22 16:55:31,535 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43169 2023-05-22 16:55:31,535 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43169 2023-05-22 16:55:31,535 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43169 2023-05-22 16:55:31,536 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43169 2023-05-22 16:55:31,542 INFO [Listener at localhost/45465] master.HMaster(444): hbase.rootdir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53, hbase.cluster.distributed=false 2023-05-22 16:55:31,609 INFO [Listener at localhost/45465] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:55:31,609 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,610 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,610 INFO [Listener at localhost/45465] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:55:31,610 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:55:31,610 INFO [Listener at localhost/45465] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:55:31,615 INFO [Listener at localhost/45465] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:55:31,618 INFO [Listener at localhost/45465] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46427 2023-05-22 
16:55:31,620 INFO [Listener at localhost/45465] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:55:31,626 DEBUG [Listener at localhost/45465] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 16:55:31,627 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:31,630 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:31,631 INFO [Listener at localhost/45465] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46427 connecting to ZooKeeper ensemble=127.0.0.1:61798 2023-05-22 16:55:31,635 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:464270x0, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:55:31,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46427-0x10053d1ad430001 connected 2023-05-22 16:55:31,636 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:55:31,638 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:55:31,638 DEBUG [Listener at localhost/45465] zookeeper.ZKUtil(164): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:55:31,639 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46427 2023-05-22 16:55:31,639 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46427 2023-05-22 16:55:31,643 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46427 2023-05-22 16:55:31,643 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46427 2023-05-22 16:55:31,643 DEBUG [Listener at localhost/45465] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46427 2023-05-22 16:55:31,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:31,654 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:55:31,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on existing 
znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:31,683 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:55:31,683 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:55:31,683 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:31,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:55:31,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43169,1684774530436 from backup master directory 2023-05-22 16:55:31,686 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:55:31,689 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:31,689 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:55:31,690 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 16:55:31,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:31,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-22 16:55:31,694 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-22 16:55:31,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase.id with ID: 7ec92188-d07e-4e03-ad71-c79337e8cfb7 2023-05-22 16:55:31,822 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:31,838 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:31,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6084d118 to 127.0.0.1:61798 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:55:31,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5cabff18, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:55:31,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:55:31,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 16:55:31,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:55:31,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store-tmp 2023-05-22 16:55:32,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:32,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:55:32,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:55:32,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:55:32,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:55:32,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:55:32,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:55:32,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:55:32,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/WALs/jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:32,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43169%2C1684774530436, suffix=, logDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/WALs/jenkins-hbase4.apache.org,43169,1684774530436, archiveDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/oldWALs, maxLogs=10 2023-05-22 16:55:32,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:55:32,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/WALs/jenkins-hbase4.apache.org,43169,1684774530436/jenkins-hbase4.apache.org%2C43169%2C1684774530436.1684774532061 2023-05-22 16:55:32,087 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK], DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK]] 2023-05-22 16:55:32,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:55:32,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:32,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,146 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,154 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 16:55:32,179 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 16:55:32,192 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:32,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:55:32,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:55:32,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=881536, jitterRate=0.12093217670917511}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:55:32,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:55:32,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 16:55:32,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 16:55:32,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-22 16:55:32,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-22 16:55:32,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-22 16:55:32,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 39 msec 2023-05-22 16:55:32,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 16:55:32,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 16:55:32,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 16:55:32,359 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 16:55:32,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-22 16:55:32,365 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 16:55:32,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 16:55:32,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 16:55:32,378 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:32,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 16:55:32,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 16:55:32,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 16:55:32,403 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:55:32,403 DEBUG [Listener at localhost/45465-EventThread] 
zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:55:32,403 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:32,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43169,1684774530436, sessionid=0x10053d1ad430000, setting cluster-up flag (Was=false) 2023-05-22 16:55:32,419 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:32,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 16:55:32,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:32,433 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:32,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 16:55:32,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:32,442 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.hbase-snapshot/.tmp 2023-05-22 16:55:32,448 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(951): ClusterId : 7ec92188-d07e-4e03-ad71-c79337e8cfb7 2023-05-22 16:55:32,452 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:55:32,461 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:55:32,461 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:55:32,464 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:55:32,465 DEBUG [RS:0;jenkins-hbase4:46427] zookeeper.ReadOnlyZKClient(139): Connect 0x3c23f8af to 127.0.0.1:61798 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:55:32,469 DEBUG [RS:0;jenkins-hbase4:46427] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32e4115b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
2023-05-22 16:55:32,470 DEBUG [RS:0;jenkins-hbase4:46427] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36db93df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:55:32,492 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46427 2023-05-22 16:55:32,496 INFO [RS:0;jenkins-hbase4:46427] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:55:32,496 INFO [RS:0;jenkins-hbase4:46427] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:55:32,496 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1022): About to register with Master. 2023-05-22 16:55:32,499 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,43169,1684774530436 with isa=jenkins-hbase4.apache.org/172.31.14.131:46427, startcode=1684774531608 2023-05-22 16:55:32,516 DEBUG [RS:0;jenkins-hbase4:46427] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:55:32,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:55:32,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774562590 2023-05-22 16:55:32,593 DEBUG [PEWorker-1] 
procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:55:32,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 16:55:32,594 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 16:55:32,599 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:55:32,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 16:55:32,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 16:55:32,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 16:55:32,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 16:55:32,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 16:55:32,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-22 16:55:32,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 16:55:32,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 16:55:32,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 16:55:32,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 16:55:32,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 16:55:32,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774532635,5,FailOnTimeoutGroup] 2023-05-22 16:55:32,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774532636,5,FailOnTimeoutGroup] 2023-05-22 16:55:32,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 16:55:32,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-22 16:55:32,645 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:55:32,647 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:55:32,647 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53 2023-05-22 16:55:32,660 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53419, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:55:32,675 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:32,677 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:55:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,681 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/info 2023-05-22 16:55:32,682 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:55:32,683 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:32,683 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:55:32,686 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:55:32,687 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:55:32,688 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:32,689 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:55:32,691 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/table 2023-05-22 16:55:32,692 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:55:32,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:32,695 DEBUG [PEWorker-1] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740 2023-05-22 16:55:32,697 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740 2023-05-22 16:55:32,700 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:55:32,701 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53 2023-05-22 16:55:32,701 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37047 2023-05-22 16:55:32,701 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:55:32,703 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:55:32,707 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:55:32,708 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:55:32,709 DEBUG [RS:0;jenkins-hbase4:46427] zookeeper.ZKUtil(162): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,709 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=853173, jitterRate=0.08486564457416534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:55:32,709 WARN [RS:0;jenkins-hbase4:46427] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 16:55:32,709 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:55:32,709 INFO [RS:0;jenkins-hbase4:46427] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:55:32,710 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:55:32,710 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:55:32,710 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,710 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:55:32,710 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:55:32,710 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:55:32,712 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:55:32,712 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46427,1684774531608] 2023-05-22 16:55:32,713 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:55:32,720 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:55:32,720 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 16:55:32,721 DEBUG [RS:0;jenkins-hbase4:46427] zookeeper.ZKUtil(162): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,730 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 16:55:32,736 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:55:32,742 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 16:55:32,744 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-22 16:55:32,751 INFO [RS:0;jenkins-hbase4:46427] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:55:32,779 INFO [RS:0;jenkins-hbase4:46427] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:55:32,782 INFO [RS:0;jenkins-hbase4:46427] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off 
peak: unlimited, tuning period: 60000 ms 2023-05-22 16:55:32,783 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,783 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:55:32,790 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,790 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,791 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,792 DEBUG [RS:0;jenkins-hbase4:46427] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:55:32,795 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,795 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,796 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:32,811 INFO [RS:0;jenkins-hbase4:46427] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:55:32,813 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46427,1684774531608-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:55:32,828 INFO [RS:0;jenkins-hbase4:46427] regionserver.Replication(203): jenkins-hbase4.apache.org,46427,1684774531608 started 2023-05-22 16:55:32,828 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46427,1684774531608, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46427, sessionid=0x10053d1ad430001 2023-05-22 16:55:32,829 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:55:32,829 DEBUG [RS:0;jenkins-hbase4:46427] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,829 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46427,1684774531608' 2023-05-22 16:55:32,829 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46427,1684774531608' 2023-05-22 16:55:32,830 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:55:32,831 DEBUG [RS:0;jenkins-hbase4:46427] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:55:32,831 DEBUG [RS:0;jenkins-hbase4:46427] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:55:32,832 INFO [RS:0;jenkins-hbase4:46427] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:55:32,832 INFO [RS:0;jenkins-hbase4:46427] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-22 16:55:32,897 DEBUG [jenkins-hbase4:43169] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 16:55:32,901 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46427,1684774531608, state=OPENING 2023-05-22 16:55:32,910 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 16:55:32,911 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:32,912 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:55:32,918 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46427,1684774531608}] 2023-05-22 16:55:32,942 INFO [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46427%2C1684774531608, suffix=, logDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608, archiveDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/oldWALs, maxLogs=32 2023-05-22 16:55:32,956 INFO [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774532946 2023-05-22 16:55:32,956 DEBUG [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:55:33,100 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:33,103 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 16:55:33,106 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 16:55:33,118 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 16:55:33,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:55:33,122 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46427%2C1684774531608.meta, suffix=.meta, logDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608, archiveDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/oldWALs, maxLogs=32 2023-05-22 16:55:33,135 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.meta.1684774533123.meta 2023-05-22 16:55:33,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK], DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK]] 2023-05-22 16:55:33,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:55:33,137 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 16:55:33,152 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 16:55:33,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 16:55:33,163 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 16:55:33,163 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:33,163 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 16:55:33,163 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 16:55:33,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:55:33,168 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/info 2023-05-22 16:55:33,168 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/info 2023-05-22 16:55:33,169 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:55:33,169 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:33,170 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:55:33,171 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:55:33,171 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:55:33,171 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:55:33,172 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:33,172 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:55:33,174 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/table 2023-05-22 16:55:33,174 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/table 2023-05-22 16:55:33,174 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:55:33,175 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:33,177 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740 2023-05-22 16:55:33,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740 2023-05-22 16:55:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:55:33,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:55:33,186 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=811177, jitterRate=0.031464993953704834}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:55:33,186 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:55:33,196 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774533093 2023-05-22 16:55:33,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 16:55:33,213 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 16:55:33,213 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46427,1684774531608, state=OPEN 2023-05-22 16:55:33,216 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 16:55:33,216 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:55:33,221 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 16:55:33,221 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46427,1684774531608 in 298 msec 2023-05-22 16:55:33,227 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 16:55:33,227 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 492 msec 2023-05-22 16:55:33,232 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 720 msec 2023-05-22 16:55:33,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774533232, completionTime=-1 2023-05-22 16:55:33,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 16:55:33,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 16:55:33,308 DEBUG [hconnection-0x413bf17a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:55:33,310 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:55:33,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 16:55:33,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774593330 2023-05-22 16:55:33,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774653330 2023-05-22 16:55:33,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 97 msec 2023-05-22 16:55:33,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43169,1684774530436-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:33,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43169,1684774530436-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:33,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43169,1684774530436-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:33,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43169, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:33,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 16:55:33,365 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 16:55:33,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-22 16:55:33,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:55:33,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 16:55:33,394 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:55:33,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:55:33,420 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,422 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b empty. 2023-05-22 16:55:33,422 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,423 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 16:55:33,471 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 16:55:33,473 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9c24de985d8632737e2a52e3a8b11f3b, NAME => 'hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp 2023-05-22 16:55:33,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:33,493 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9c24de985d8632737e2a52e3a8b11f3b, disabling compactions & flushes 2023-05-22 16:55:33,493 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 
2023-05-22 16:55:33,493 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:55:33,493 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. after waiting 0 ms 2023-05-22 16:55:33,493 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:55:33,493 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:55:33,493 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9c24de985d8632737e2a52e3a8b11f3b: 2023-05-22 16:55:33,497 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:55:33,513 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774533499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774533499"}]},"ts":"1684774533499"} 2023-05-22 16:55:33,544 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:55:33,546 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:55:33,552 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774533546"}]},"ts":"1684774533546"} 2023-05-22 16:55:33,556 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 16:55:33,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9c24de985d8632737e2a52e3a8b11f3b, ASSIGN}] 2023-05-22 16:55:33,569 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9c24de985d8632737e2a52e3a8b11f3b, ASSIGN 2023-05-22 16:55:33,570 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9c24de985d8632737e2a52e3a8b11f3b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46427,1684774531608; forceNewPlan=false, retain=false 2023-05-22 16:55:33,721 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9c24de985d8632737e2a52e3a8b11f3b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:33,722 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774533721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774533721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774533721"}]},"ts":"1684774533721"} 2023-05-22 16:55:33,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 9c24de985d8632737e2a52e3a8b11f3b, server=jenkins-hbase4.apache.org,46427,1684774531608}] 2023-05-22 16:55:33,890 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:55:33,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9c24de985d8632737e2a52e3a8b11f3b, NAME => 'hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:55:33,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:33,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,895 INFO [StoreOpener-9c24de985d8632737e2a52e3a8b11f3b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,898 DEBUG [StoreOpener-9c24de985d8632737e2a52e3a8b11f3b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/info 2023-05-22 16:55:33,898 DEBUG [StoreOpener-9c24de985d8632737e2a52e3a8b11f3b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/info 2023-05-22 16:55:33,898 INFO [StoreOpener-9c24de985d8632737e2a52e3a8b11f3b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9c24de985d8632737e2a52e3a8b11f3b columnFamilyName info 2023-05-22 16:55:33,899 INFO [StoreOpener-9c24de985d8632737e2a52e3a8b11f3b-1] regionserver.HStore(310): Store=9c24de985d8632737e2a52e3a8b11f3b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:33,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,908 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:55:33,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:55:33,913 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9c24de985d8632737e2a52e3a8b11f3b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=689366, jitterRate=-0.12342590093612671}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:55:33,913 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9c24de985d8632737e2a52e3a8b11f3b: 2023-05-22 16:55:33,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b., pid=6, masterSystemTime=1684774533882 2023-05-22 16:55:33,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:55:33,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 
2023-05-22 16:55:33,924 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9c24de985d8632737e2a52e3a8b11f3b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:33,924 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774533923"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774533923"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774533923"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774533923"}]},"ts":"1684774533923"} 2023-05-22 16:55:33,933 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 16:55:33,933 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 9c24de985d8632737e2a52e3a8b11f3b, server=jenkins-hbase4.apache.org,46427,1684774531608 in 201 msec 2023-05-22 16:55:33,937 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 16:55:33,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9c24de985d8632737e2a52e3a8b11f3b, ASSIGN in 368 msec 2023-05-22 16:55:33,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:55:33,940 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774533940"}]},"ts":"1684774533940"} 2023-05-22 16:55:33,944 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 16:55:33,951 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:55:33,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 571 msec 2023-05-22 16:55:33,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 16:55:33,996 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:55:33,996 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:34,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 16:55:34,056 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): 
master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:55:34,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 33 msec 2023-05-22 16:55:34,072 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 16:55:34,088 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:55:34,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 21 msec 2023-05-22 16:55:34,109 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 16:55:34,113 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 16:55:34,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.423sec 2023-05-22 16:55:34,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 16:55:34,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 16:55:34,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 16:55:34,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43169,1684774530436-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 16:55:34,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43169,1684774530436-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-22 16:55:34,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 16:55:34,153 DEBUG [Listener at localhost/45465] zookeeper.ReadOnlyZKClient(139): Connect 0x5f9473af to 127.0.0.1:61798 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:55:34,159 DEBUG [Listener at localhost/45465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@378e20ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:55:34,174 DEBUG [hconnection-0x678c213f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:55:34,189 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54166, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:55:34,199 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:55:34,199 INFO [Listener at localhost/45465] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:55:34,208 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 16:55:34,208 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:55:34,209 INFO [Listener at localhost/45465] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 16:55:34,221 DEBUG [Listener at localhost/45465] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-22 16:55:34,225 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-22 16:55:34,238 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-22 16:55:34,238 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-22 16:55:34,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:55:34,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-22 16:55:34,247 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:55:34,250 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:55:34,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-22 16:55:34,255 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,257 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab empty. 
2023-05-22 16:55:34,259 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,259 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-22 16:55:34,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:55:34,286 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-22 16:55:34,288 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 64f2ca9b2a31b1b54f8de8f79d0433ab, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/.tmp 2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 64f2ca9b2a31b1b54f8de8f79d0433ab, disabling compactions & flushes 2023-05-22 16:55:34,308 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. after waiting 0 ms 2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:34,308 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 
2023-05-22 16:55:34,308 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:55:34,312 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:55:34,314 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684774534314"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774534314"}]},"ts":"1684774534314"} 2023-05-22 16:55:34,318 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:55:34,320 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:55:34,320 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774534320"}]},"ts":"1684774534320"} 2023-05-22 16:55:34,324 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-22 16:55:34,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=64f2ca9b2a31b1b54f8de8f79d0433ab, ASSIGN}] 2023-05-22 16:55:34,330 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=64f2ca9b2a31b1b54f8de8f79d0433ab, ASSIGN 2023-05-22 16:55:34,332 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=64f2ca9b2a31b1b54f8de8f79d0433ab, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46427,1684774531608; forceNewPlan=false, retain=false 2023-05-22 16:55:34,483 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=64f2ca9b2a31b1b54f8de8f79d0433ab, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:34,484 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684774534483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774534483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774534483"}]},"ts":"1684774534483"} 2023-05-22 16:55:34,488 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 64f2ca9b2a31b1b54f8de8f79d0433ab, server=jenkins-hbase4.apache.org,46427,1684774531608}] 2023-05-22 16:55:34,648 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:34,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64f2ca9b2a31b1b54f8de8f79d0433ab, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:55:34,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:55:34,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,652 INFO [StoreOpener-64f2ca9b2a31b1b54f8de8f79d0433ab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,655 DEBUG [StoreOpener-64f2ca9b2a31b1b54f8de8f79d0433ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info 2023-05-22 16:55:34,655 DEBUG [StoreOpener-64f2ca9b2a31b1b54f8de8f79d0433ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info 2023-05-22 16:55:34,656 INFO [StoreOpener-64f2ca9b2a31b1b54f8de8f79d0433ab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64f2ca9b2a31b1b54f8de8f79d0433ab columnFamilyName info 2023-05-22 16:55:34,657 INFO [StoreOpener-64f2ca9b2a31b1b54f8de8f79d0433ab-1] regionserver.HStore(310): Store=64f2ca9b2a31b1b54f8de8f79d0433ab/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:55:34,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:34,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:55:34,669 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 64f2ca9b2a31b1b54f8de8f79d0433ab; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=766350, jitterRate=-0.02553558349609375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:55:34,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:55:34,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab., pid=11, masterSystemTime=1684774534641 2023-05-22 16:55:34,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:34,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 
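With the region now open, the flush cycle that follows is driven by small batches of writes: each batch adds roughly 7.36 KB to the memstore, which the tiny flush size turns into a new ~12.1 K store file (see the flush entries below). A rough, self-contained sketch of that kind of driver loop, assuming the standard Table/Admin client API; the row keys, qualifier and batch sizes here are made up for illustration, and the real test relies on memstore pressure rather than an explicit admin flush:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteAndFlush {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("TestLogRolling-testSlowSyncLogRolling");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(name);
             Admin admin = conn.getAdmin()) {
          byte[] family = Bytes.toBytes("info");
          for (int batch = 0; batch < 3; batch++) {
            // Seven ~1 KB puts per batch roughly matches the ~7.36 KB flushes logged below.
            for (int i = 0; i < 7; i++) {
              Put put = new Put(Bytes.toBytes("row-" + batch + "-" + i)); // hypothetical row keys
              put.addColumn(family, Bytes.toBytes("q"), new byte[1024]);
              table.put(put);
            }
            admin.flush(name); // here an explicit flush stands in for the memstore-pressure flushes below
          }
        }
      }
    }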
2023-05-22 16:55:34,676 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=64f2ca9b2a31b1b54f8de8f79d0433ab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:55:34,676 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684774534675"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774534675"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774534675"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774534675"}]},"ts":"1684774534675"} 2023-05-22 16:55:34,683 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-22 16:55:34,683 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 64f2ca9b2a31b1b54f8de8f79d0433ab, server=jenkins-hbase4.apache.org,46427,1684774531608 in 191 msec 2023-05-22 16:55:34,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-22 16:55:34,687 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=64f2ca9b2a31b1b54f8de8f79d0433ab, ASSIGN in 355 msec 2023-05-22 16:55:34,688 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:55:34,689 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774534688"}]},"ts":"1684774534688"} 2023-05-22 16:55:34,691 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-22 16:55:34,694 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:55:34,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 452 msec 2023-05-22 16:55:38,672 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-22 16:55:38,744 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-22 16:55:38,745 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 16:55:38,746 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-22 16:55:40,589 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-22 16:55:40,589 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-22 16:55:44,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43169] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:55:44,274 INFO [Listener at localhost/45465] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-22 16:55:44,278 DEBUG [Listener at localhost/45465] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-22 16:55:44,279 DEBUG [Listener at localhost/45465] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:55:56,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46427] regionserver.HRegion(9158): Flush requested on 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:55:56,306 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64f2ca9b2a31b1b54f8de8f79d0433ab 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:55:56,374 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/c4f88311cf934123bfc30d4c81085d76 2023-05-22 16:55:56,419 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/c4f88311cf934123bfc30d4c81085d76 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76 2023-05-22 16:55:56,429 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76, entries=7, sequenceid=11, filesize=12.1 K 2023-05-22 16:55:56,432 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 64f2ca9b2a31b1b54f8de8f79d0433ab in 127ms, sequenceid=11, compaction requested=false 2023-05-22 16:55:56,433 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:04,516 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:06,720 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:08,923 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:11,126 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:11,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46427] regionserver.HRegion(9158): Flush requested on 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:56:11,127 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64f2ca9b2a31b1b54f8de8f79d0433ab 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:56:11,328 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:11,346 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/508c10081b41456586689604ee08674b 2023-05-22 16:56:11,355 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/508c10081b41456586689604ee08674b as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b 2023-05-22 16:56:11,362 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b, entries=7, sequenceid=21, filesize=12.1 K 2023-05-22 16:56:11,564 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:11,564 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 64f2ca9b2a31b1b54f8de8f79d0433ab in 438ms, sequenceid=21, compaction requested=false 2023-05-22 16:56:11,565 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:11,565 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-22 16:56:11,565 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:56:11,566 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76 
because midkey is the same as first or last row 2023-05-22 16:56:13,329 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:15,532 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:15,533 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46427%2C1684774531608:(num 1684774532946) roll requested 2023-05-22 16:56:15,534 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:15,746 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:15,748 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774532946 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774575534 2023-05-22 16:56:15,748 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:15,749 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774532946 is not closed yet, will try archiving it next time 2023-05-22 16:56:25,546 INFO [Listener at localhost/45465] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-22 16:56:30,549 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:30,549 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:30,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46427] regionserver.HRegion(9158): Flush requested on 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:56:30,549 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46427%2C1684774531608:(num 1684774575534) roll requested 2023-05-22 16:56:30,549 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64f2ca9b2a31b1b54f8de8f79d0433ab 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:56:32,550 INFO [Listener at localhost/45465] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-22 16:56:35,551 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:35,551 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:35,565 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:35,566 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK], DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK]] 2023-05-22 16:56:35,568 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774575534 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774590549 2023-05-22 16:56:35,568 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37373,DS-04b25b46-8162-44e7-bc73-f17122f87c99,DISK], DatanodeInfoWithStorage[127.0.0.1:34015,DS-69b2633b-f4cb-4531-9587-2e0b081ec070,DISK]] 2023-05-22 16:56:35,568 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608/jenkins-hbase4.apache.org%2C46427%2C1684774531608.1684774575534 is not closed yet, will try archiving it next time 2023-05-22 16:56:35,570 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/3a0df4588a064366aad628d68081e935 2023-05-22 16:56:35,580 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/3a0df4588a064366aad628d68081e935 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935 2023-05-22 16:56:35,588 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935, entries=7, sequenceid=31, filesize=12.1 K 2023-05-22 16:56:35,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 64f2ca9b2a31b1b54f8de8f79d0433ab in 5041ms, sequenceid=31, compaction requested=true 2023-05-22 16:56:35,590 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:35,590 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-22 16:56:35,590 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:56:35,590 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76 because midkey is the same as first or last row 2023-05-22 16:56:35,592 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 16:56:35,593 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 16:56:35,597 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 16:56:35,599 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HStore(1912): 64f2ca9b2a31b1b54f8de8f79d0433ab/info is initiating minor compaction (all files) 2023-05-22 16:56:35,600 INFO [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 64f2ca9b2a31b1b54f8de8f79d0433ab/info in TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 
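The recurring pair of entries above, "Should split because info size=..., sizeToCheck=16.0 K" followed by "cannot split ... because midkey is the same as first or last row", follows from the SteppingSplitPolicy printed when the region opened: with a single region of this table on the server, the size to check against is the policy's initialSize (16384 bytes here, twice the 8192-byte flush size) rather than the jittered desiredMaxFileSize of 766350, so every post-flush store size exceeds it; the split is then abandoned anyway because the candidate store file's midkey coincides with its first or last row key, leaving no usable split point. A tiny sketch of that size check, written from the values logged above (a simplified reading of the policy, not HBase's actual implementation):

    public class SplitCheckSketch {
      public static void main(String[] args) {
        long desiredMaxFileSize = 766350L;   // from the region-open log line (after jitter)
        long initialSize = 16384L;           // 2 * MEMSTORE_FLUSHSIZE (8192), also logged at open
        int regionsWithCommonTable = 1;      // logged before each split check above

        // Simplified stepping behaviour: with one region, check against initialSize;
        // with more regions, fall back to the desired max file size.
        long sizeToCheck = (regionsWithCommonTable == 1) ? initialSize : desiredMaxFileSize;

        long storeSize = 37197L;             // the 36.3 K of store files selected above
        System.out.println("sizeToCheck=" + sizeToCheck
            + ", shouldSplit=" + (storeSize > sizeToCheck)); // sizeToCheck=16384, shouldSplit=true
      }
    }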
2023-05-22 16:56:35,600 INFO [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935] into tmpdir=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp, totalSize=36.3 K 2023-05-22 16:56:35,601 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] compactions.Compactor(207): Compacting c4f88311cf934123bfc30d4c81085d76, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684774544283 2023-05-22 16:56:35,602 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] compactions.Compactor(207): Compacting 508c10081b41456586689604ee08674b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1684774558306 2023-05-22 16:56:35,603 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] compactions.Compactor(207): Compacting 3a0df4588a064366aad628d68081e935, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1684774573128 2023-05-22 16:56:35,630 INFO [RS:0;jenkins-hbase4:46427-shortCompactions-0] throttle.PressureAwareThroughputController(145): 64f2ca9b2a31b1b54f8de8f79d0433ab#info#compaction#3 average throughput is 21.55 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 16:56:35,649 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/f230e3425af84735b9afd6608c686406 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/f230e3425af84735b9afd6608c686406 2023-05-22 16:56:35,669 INFO [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 64f2ca9b2a31b1b54f8de8f79d0433ab/info of 64f2ca9b2a31b1b54f8de8f79d0433ab into f230e3425af84735b9afd6608c686406(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 16:56:35,669 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:35,669 INFO [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab., storeName=64f2ca9b2a31b1b54f8de8f79d0433ab/info, priority=13, startTime=1684774595592; duration=0sec 2023-05-22 16:56:35,670 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-22 16:56:35,671 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:56:35,671 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/f230e3425af84735b9afd6608c686406 because midkey is the same as first or last row 2023-05-22 16:56:35,671 DEBUG [RS:0;jenkins-hbase4:46427-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 16:56:47,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46427] regionserver.HRegion(9158): Flush requested on 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:56:47,671 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 64f2ca9b2a31b1b54f8de8f79d0433ab 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:56:47,689 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/af2d03c332454508ae7bc6d512948e76 2023-05-22 16:56:47,698 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/af2d03c332454508ae7bc6d512948e76 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/af2d03c332454508ae7bc6d512948e76 2023-05-22 16:56:47,706 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/af2d03c332454508ae7bc6d512948e76, entries=7, sequenceid=42, filesize=12.1 K 2023-05-22 16:56:47,707 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 64f2ca9b2a31b1b54f8de8f79d0433ab in 36ms, sequenceid=42, compaction requested=false 2023-05-22 16:56:47,708 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:47,708 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-22 
16:56:47,708 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:56:47,708 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/f230e3425af84735b9afd6608c686406 because midkey is the same as first or last row 2023-05-22 16:56:55,680 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-22 16:56:55,680 INFO [Listener at localhost/45465] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-22 16:56:55,680 DEBUG [Listener at localhost/45465] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f9473af to 127.0.0.1:61798 2023-05-22 16:56:55,681 DEBUG [Listener at localhost/45465] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:56:55,681 DEBUG [Listener at localhost/45465] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-22 16:56:55,681 DEBUG [Listener at localhost/45465] util.JVMClusterUtil(257): Found active master hash=759773959, stopped=false 2023-05-22 16:56:55,681 INFO [Listener at localhost/45465] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:56:55,684 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:56:55,684 INFO [Listener at localhost/45465] procedure2.ProcedureExecutor(629): Stopping 2023-05-22 16:56:55,684 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:56:55,684 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:55,684 DEBUG [Listener at localhost/45465] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6084d118 to 127.0.0.1:61798 2023-05-22 16:56:55,685 DEBUG [Listener at localhost/45465] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:56:55,685 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:56:55,685 INFO [Listener at localhost/45465] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46427,1684774531608' ***** 2023-05-22 16:56:55,685 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:56:55,685 INFO [Listener at localhost/45465] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 16:56:55,685 INFO [RS:0;jenkins-hbase4:46427] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 16:56:55,686 INFO [RS:0;jenkins-hbase4:46427] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
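Everything from the "Shutting down minicluster" entry above onwards is ordinary teardown: in the entries that follow, the master requests cluster shutdown, the region server closes its three online regions (the test region, hbase:meta and hbase:namespace), flushes whatever is left in their memstores, and archives its WALs. In test code this whole sequence is normally triggered by a single call on the same HBaseTestingUtility that produced that entry; a minimal lifecycle sketch (illustrative only):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterLifecycle {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();        // DFS, ZooKeeper, one master, one region server
        try {
          // ... test body: create the table, write, provoke slow syncs and WAL rolls ...
        } finally {
          util.shutdownMiniCluster();   // produces the shutdown sequence logged below
        }
      }
    }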
2023-05-22 16:56:55,686 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 16:56:55,686 INFO [RS:0;jenkins-hbase4:46427] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 16:56:55,686 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(3303): Received CLOSE for 64f2ca9b2a31b1b54f8de8f79d0433ab 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(3303): Received CLOSE for 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:56:55,687 DEBUG [RS:0;jenkins-hbase4:46427] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c23f8af to 127.0.0.1:61798 2023-05-22 16:56:55,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 64f2ca9b2a31b1b54f8de8f79d0433ab, disabling compactions & flushes 2023-05-22 16:56:55,687 DEBUG [RS:0;jenkins-hbase4:46427] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:56:55,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 16:56:55,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:56:55,687 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 16:56:55,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. after waiting 0 ms 2023-05-22 16:56:55,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 
2023-05-22 16:56:55,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 64f2ca9b2a31b1b54f8de8f79d0433ab 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-22 16:56:55,688 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-22 16:56:55,688 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1478): Online Regions={64f2ca9b2a31b1b54f8de8f79d0433ab=TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab., 1588230740=hbase:meta,,1.1588230740, 9c24de985d8632737e2a52e3a8b11f3b=hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b.} 2023-05-22 16:56:55,688 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:56:55,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:56:55,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:56:55,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:56:55,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:56:55,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-22 16:56:55,690 DEBUG [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1504): Waiting on 1588230740, 64f2ca9b2a31b1b54f8de8f79d0433ab, 9c24de985d8632737e2a52e3a8b11f3b 2023-05-22 16:56:55,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/b964ab83137d41fbaf33b2fca51b856a 2023-05-22 16:56:55,710 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/.tmp/info/8ed4b65b7a984d3f871d2468e72e71ef 2023-05-22 16:56:55,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/.tmp/info/b964ab83137d41fbaf33b2fca51b856a as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/b964ab83137d41fbaf33b2fca51b856a 2023-05-22 16:56:55,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/b964ab83137d41fbaf33b2fca51b856a, entries=3, sequenceid=48, filesize=7.9 K 2023-05-22 16:56:55,727 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 64f2ca9b2a31b1b54f8de8f79d0433ab in 39ms, sequenceid=48, compaction requested=true 2023-05-22 16:56:55,730 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935] to archive 2023-05-22 16:56:55,733 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-22 16:56:55,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/.tmp/table/3f984a201ca041f3a32436e205523b0a 2023-05-22 16:56:55,740 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76 to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/archive/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/c4f88311cf934123bfc30d4c81085d76 2023-05-22 16:56:55,742 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/archive/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/508c10081b41456586689604ee08674b 2023-05-22 16:56:55,743 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/.tmp/info/8ed4b65b7a984d3f871d2468e72e71ef as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/info/8ed4b65b7a984d3f871d2468e72e71ef 2023-05-22 16:56:55,746 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935 to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/archive/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/info/3a0df4588a064366aad628d68081e935 2023-05-22 16:56:55,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/info/8ed4b65b7a984d3f871d2468e72e71ef, entries=20, sequenceid=14, filesize=7.4 K 2023-05-22 16:56:55,753 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/.tmp/table/3f984a201ca041f3a32436e205523b0a as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/table/3f984a201ca041f3a32436e205523b0a 2023-05-22 16:56:55,759 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/table/3f984a201ca041f3a32436e205523b0a, entries=4, sequenceid=14, filesize=4.8 K 2023-05-22 16:56:55,760 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 71ms, sequenceid=14, compaction requested=false 2023-05-22 16:56:55,769 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-22 16:56:55,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-22 16:56:55,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:56:55,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:56:55,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-22 16:56:55,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/default/TestLogRolling-testSlowSyncLogRolling/64f2ca9b2a31b1b54f8de8f79d0433ab/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-22 16:56:55,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 2023-05-22 16:56:55,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 64f2ca9b2a31b1b54f8de8f79d0433ab: 2023-05-22 16:56:55,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1684774534238.64f2ca9b2a31b1b54f8de8f79d0433ab. 
2023-05-22 16:56:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9c24de985d8632737e2a52e3a8b11f3b, disabling compactions & flushes 2023-05-22 16:56:55,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:56:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:56:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. after waiting 0 ms 2023-05-22 16:56:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:56:55,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9c24de985d8632737e2a52e3a8b11f3b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 16:56:55,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/.tmp/info/c96fc9e67f73414f835dee49a18ccc27 2023-05-22 16:56:55,796 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-22 16:56:55,796 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-22 16:56:55,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/.tmp/info/c96fc9e67f73414f835dee49a18ccc27 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/info/c96fc9e67f73414f835dee49a18ccc27 2023-05-22 16:56:55,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/info/c96fc9e67f73414f835dee49a18ccc27, entries=2, sequenceid=6, filesize=4.8 K 2023-05-22 16:56:55,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 9c24de985d8632737e2a52e3a8b11f3b in 30ms, sequenceid=6, compaction requested=false 2023-05-22 16:56:55,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/data/hbase/namespace/9c24de985d8632737e2a52e3a8b11f3b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-22 16:56:55,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 
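Editor's note: a "Finished flush of dataSize ... in NNms, sequenceid=NN" line is emitted for every region closed here (the test table, hbase:meta, hbase:namespace, and later the master's local store). When triaging slow-sync or flush behaviour it helps to scrape those lines; the sketch below pulls out the byte counts, region, duration and sequence id with a regex written against the exact wording in this log (other HBase versions may word it differently).

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FlushLogScraper {
      // Matches e.g. "Finished flush of dataSize ~78 B/78, heapSize ~472 B/472,
      //               currentSize=0 B/0 for 9c24de985d8632737e2a52e3a8b11f3b in 30ms, sequenceid=6, ..."
      private static final Pattern FLUSH = Pattern.compile(
          "Finished flush of dataSize ~[^/]+/(\\d+), heapSize ~[^/]+/(\\d+), "
              + "currentSize=[^ ]+ [^/]+/\\d+ for (\\S+) in (\\d+)ms, sequenceid=(\\d+)");

      // Usage: java FlushLogScraper <path-to-test-log>
      public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
          Matcher m = FLUSH.matcher(line);
          // A single flattened log line may hold several entries, so keep scanning.
          while (m.find()) {
            System.out.printf("region=%s bytes=%s heap=%s ms=%s seqid=%s%n",
                m.group(3), m.group(1), m.group(2), m.group(4), m.group(5));
          }
        }
      }
    }
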
2023-05-22 16:56:55,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9c24de985d8632737e2a52e3a8b11f3b: 2023-05-22 16:56:55,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774533378.9c24de985d8632737e2a52e3a8b11f3b. 2023-05-22 16:56:55,890 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46427,1684774531608; all regions closed. 2023-05-22 16:56:55,891 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:56:56,300 DEBUG [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/oldWALs 2023-05-22 16:56:56,300 INFO [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46427%2C1684774531608.meta:.meta(num 1684774533123) 2023-05-22 16:56:56,301 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/WALs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:56:56,311 DEBUG [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/oldWALs 2023-05-22 16:56:56,311 INFO [RS:0;jenkins-hbase4:46427] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46427%2C1684774531608:(num 1684774590549) 2023-05-22 16:56:56,311 DEBUG [RS:0;jenkins-hbase4:46427] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:56:56,311 INFO [RS:0;jenkins-hbase4:46427] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:56:56,311 INFO [RS:0;jenkins-hbase4:46427] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-22 16:56:56,312 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
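Editor's note: the AbstractFSWAL entries above show the region server's WALs being closed and the remaining files moved to oldWALs, which is the same path a normal log roll takes. For reference only (this is a sketch of the public client call, not part of the test), a roll can also be requested explicitly through the Admin API; the server name string is copied from the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Server name format "<host>,<port>,<startcode>" is exactly what the log prints.
          ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,46427,1684774531608");
          // Ask this region server to roll its WAL; once nothing references the replaced
          // file any more it ends up under oldWALs, as the entries above show.
          admin.rollWALWriter(rs);
        }
      }
    }
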
2023-05-22 16:56:56,312 INFO [RS:0;jenkins-hbase4:46427] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46427 2023-05-22 16:56:56,319 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46427,1684774531608 2023-05-22 16:56:56,319 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:56:56,319 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:56:56,321 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46427,1684774531608] 2023-05-22 16:56:56,322 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46427,1684774531608; numProcessing=1 2023-05-22 16:56:56,324 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46427,1684774531608 already deleted, retry=false 2023-05-22 16:56:56,324 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46427,1684774531608 expired; onlineServers=0 2023-05-22 16:56:56,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43169,1684774530436' ***** 2023-05-22 16:56:56,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 16:56:56,324 DEBUG [M:0;jenkins-hbase4:43169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c74313c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:56:56,325 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:56:56,325 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43169,1684774530436; all regions closed. 2023-05-22 16:56:56,325 DEBUG [M:0;jenkins-hbase4:43169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:56:56,325 DEBUG [M:0;jenkins-hbase4:43169] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 16:56:56,325 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-22 16:56:56,325 DEBUG [M:0;jenkins-hbase4:43169] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-22 16:56:56,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774532635] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774532635,5,FailOnTimeoutGroup] 2023-05-22 16:56:56,326 INFO [M:0;jenkins-hbase4:43169] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
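Editor's note: the ZKWatcher events above are the master noticing that the region server's ephemeral znode under /hbase/rs disappeared, which is what drives RegionServerTracker to process the expiration. The sketch below shows only the underlying ZooKeeper pattern (set a watch with exists(), react to NodeDeleted) using the plain ZooKeeper client; it is an illustration of the mechanism, not HBase's tracker code, and the quorum and znode strings are copied from the log.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        String znode = "/hbase/rs/jenkins-hbase4.apache.org,46427,1684774531608"; // from the log
        CountDownLatch deleted = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:61798", 30000, event -> { });
        // Register a one-shot watch; NodeDeleted fires when the ephemeral node goes away,
        // e.g. because the region server's session closed during shutdown.
        zk.exists(znode, (WatchedEvent event) -> {
          if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
            deleted.countDown();
          }
        });
        deleted.await();
        zk.close();
      }
    }
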
2023-05-22 16:56:56,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774532636] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774532636,5,FailOnTimeoutGroup] 2023-05-22 16:56:56,327 INFO [M:0;jenkins-hbase4:43169] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-22 16:56:56,327 INFO [M:0;jenkins-hbase4:43169] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-22 16:56:56,328 DEBUG [M:0;jenkins-hbase4:43169] master.HMaster(1512): Stopping service threads 2023-05-22 16:56:56,328 INFO [M:0;jenkins-hbase4:43169] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-05-22 16:56:56,328 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-22 16:56:56,329 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:56,329 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:56:56,329 INFO [M:0;jenkins-hbase4:43169] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-22 16:56:56,329 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-22 16:56:56,329 DEBUG [M:0;jenkins-hbase4:43169] zookeeper.ZKUtil(398): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-22 16:56:56,330 WARN [M:0;jenkins-hbase4:43169] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-22 16:56:56,330 INFO [M:0;jenkins-hbase4:43169] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-22 16:56:56,330 INFO [M:0;jenkins-hbase4:43169] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-22 16:56:56,330 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:56:56,331 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:56,331 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:56,331 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:56:56,331 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-22 16:56:56,331 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.27 KB heapSize=46.71 KB 2023-05-22 16:56:56,346 INFO [M:0;jenkins-hbase4:43169] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.27 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1ef48d41268b4cdc82fcec84f0532729 2023-05-22 16:56:56,352 INFO [M:0;jenkins-hbase4:43169] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ef48d41268b4cdc82fcec84f0532729 2023-05-22 16:56:56,353 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1ef48d41268b4cdc82fcec84f0532729 as hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1ef48d41268b4cdc82fcec84f0532729 2023-05-22 16:56:56,359 INFO [M:0;jenkins-hbase4:43169] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ef48d41268b4cdc82fcec84f0532729 2023-05-22 16:56:56,360 INFO [M:0;jenkins-hbase4:43169] regionserver.HStore(1080): Added hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1ef48d41268b4cdc82fcec84f0532729, entries=11, sequenceid=100, filesize=6.1 K 2023-05-22 16:56:56,361 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegion(2948): Finished flush of dataSize ~38.27 KB/39184, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=100, compaction requested=false 2023-05-22 16:56:56,362 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:56,362 DEBUG [M:0;jenkins-hbase4:43169] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:56:56,362 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/MasterData/WALs/jenkins-hbase4.apache.org,43169,1684774530436 2023-05-22 16:56:56,367 INFO [M:0;jenkins-hbase4:43169] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-22 16:56:56,367 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:56:56,367 INFO [M:0;jenkins-hbase4:43169] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43169 2023-05-22 16:56:56,370 DEBUG [M:0;jenkins-hbase4:43169] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43169,1684774530436 already deleted, retry=false 2023-05-22 16:56:56,421 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:56:56,421 INFO [RS:0;jenkins-hbase4:46427] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46427,1684774531608; zookeeper connection closed. 
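Editor's note: once the JVM cluster is down, the ResourceChecker "after" report a few entries below compares thread and file-descriptor counts against the "before" snapshot (Thread=50, was 10) and dumps the stack of every thread it considers potentially hanging. Its thread accounting is built from a JVM-wide stack snapshot; a minimal sketch of taking one and printing the survivors is shown here purely to illustrate what that report is derived from.

    import java.util.Map;

    public class ThreadSnapshotSketch {
      public static void main(String[] args) {
        // Same primitive the ResourceChecker listener relies on: a map of every
        // live thread to its current stack trace.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        System.out.println("live threads: " + all.size());
        all.forEach((thread, stack) -> {
          System.out.println("Potentially lingering? " + thread.getName()
              + " state=" + thread.getState());
          for (StackTraceElement frame : stack) {
            System.out.println("    " + frame);
          }
        });
      }
    }
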
2023-05-22 16:56:56,421 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): regionserver:46427-0x10053d1ad430001, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:56:56,422 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@67fae407] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@67fae407 2023-05-22 16:56:56,422 INFO [Listener at localhost/45465] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-22 16:56:56,521 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:56:56,521 INFO [M:0;jenkins-hbase4:43169] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43169,1684774530436; zookeeper connection closed. 2023-05-22 16:56:56,521 DEBUG [Listener at localhost/45465-EventThread] zookeeper.ZKWatcher(600): master:43169-0x10053d1ad430000, quorum=127.0.0.1:61798, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:56:56,523 WARN [Listener at localhost/45465] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:56:56,526 INFO [Listener at localhost/45465] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:56:56,631 WARN [BP-1252424714-172.31.14.131-1684774527478 heartbeating to localhost/127.0.0.1:37047] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:56:56,631 WARN [BP-1252424714-172.31.14.131-1684774527478 heartbeating to localhost/127.0.0.1:37047] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1252424714-172.31.14.131-1684774527478 (Datanode Uuid a40152c2-af2c-4a62-914e-71ae7417cdb8) service to localhost/127.0.0.1:37047 2023-05-22 16:56:56,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/dfs/data/data3/current/BP-1252424714-172.31.14.131-1684774527478] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:56:56,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/dfs/data/data4/current/BP-1252424714-172.31.14.131-1684774527478] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:56:56,634 WARN [Listener at localhost/45465] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:56:56,636 INFO [Listener at localhost/45465] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:56:56,739 WARN [BP-1252424714-172.31.14.131-1684774527478 heartbeating to localhost/127.0.0.1:37047] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:56:56,740 WARN [BP-1252424714-172.31.14.131-1684774527478 heartbeating to localhost/127.0.0.1:37047] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1252424714-172.31.14.131-1684774527478 (Datanode Uuid 8f78caec-345f-47eb-bc9a-fe13d72c8b6a) service to localhost/127.0.0.1:37047 2023-05-22 16:56:56,740 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/dfs/data/data1/current/BP-1252424714-172.31.14.131-1684774527478] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:56:56,741 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/cluster_0d2d255a-1426-ce35-eedb-4562ca2c4eca/dfs/data/data2/current/BP-1252424714-172.31.14.131-1684774527478] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:56:56,774 INFO [Listener at localhost/45465] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:56:56,799 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:56:56,886 INFO [Listener at localhost/45465] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-22 16:56:56,919 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-22 16:56:56,930 INFO [Listener at localhost/45465] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@7d204cbe java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37047 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:37047 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker 
java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:37047 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:37047 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:37047 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45465 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) 
org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=439 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=91 (was 211), ProcessCount=169 (was 169), AvailableMemoryMB=5959 (was 6603) 2023-05-22 16:56:56,938 INFO [Listener at localhost/45465] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=91, ProcessCount=169, AvailableMemoryMB=5958 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/hadoop.log.dir so I do NOT create it in target/test-data/5570634d-4595-b066-964e-41db08c8ccde 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8ee6b381-01cc-dcda-09d3-02ae80530a20/hadoop.tmp.dir so I do NOT create it in target/test-data/5570634d-4595-b066-964e-41db08c8ccde 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182, deleteOnExit=true 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/test.cache.data in system properties and HBase conf 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 16:56:56,939 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/hadoop.log.dir in system properties and HBase conf 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/mapreduce.cluster.temp.dir in 
system properties and HBase conf 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 16:56:56,940 DEBUG [Listener at localhost/45465] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:56:56,940 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting 
nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/nfs.dump.dir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir in system properties and HBase conf 2023-05-22 16:56:56,941 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:56:56,942 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 16:56:56,942 INFO [Listener at localhost/45465] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 16:56:56,943 WARN [Listener at localhost/45465] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 16:56:56,946 WARN [Listener at localhost/45465] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:56:56,946 WARN [Listener at localhost/45465] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:56:56,991 WARN [Listener at localhost/45465] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:56:56,993 INFO [Listener at localhost/45465] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:56:57,011 INFO [Listener at localhost/45465] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_39153_hdfs____408jyr/webapp 2023-05-22 16:56:57,101 INFO [Listener at localhost/45465] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39153 2023-05-22 16:56:57,103 WARN [Listener at localhost/45465] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
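Editor's note: the second test (testLogRollOnDatanodeDeath) re-runs the same startup sequence as the first: a ResourceChecker "before" snapshot, then HBaseTestingUtility bringing up a fresh mini cluster with one master, one region server, two datanodes and one ZooKeeper server, re-pointing the Hadoop/YARN/DFS directories at a new test-data folder. A hedged sketch of that StartMiniClusterOption-driven startup in a JUnit test is below; the builder method names mirror the option fields printed in the log, but verify them against the HBase 2.4 test utility before relying on them.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.After;
    import org.junit.Before;

    public class MiniClusterSetupSketch {
      private final HBaseTestingUtility testUtil = new HBaseTestingUtility();

      @Before
      public void setUp() throws Exception {
        // Mirrors the option printed in the log:
        // numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        testUtil.startMiniCluster(option); // starts DFS, ZooKeeper, master and region server
      }

      @After
      public void tearDown() throws Exception {
        testUtil.shutdownMiniCluster(); // produces the "Minicluster is down" shutdown seen above
      }
    }
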
2023-05-22 16:56:57,107 WARN [Listener at localhost/45465] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:56:57,107 WARN [Listener at localhost/45465] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:56:57,155 WARN [Listener at localhost/36627] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:56:57,165 WARN [Listener at localhost/36627] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:56:57,168 WARN [Listener at localhost/36627] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:56:57,169 INFO [Listener at localhost/36627] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:56:57,174 INFO [Listener at localhost/36627] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_35309_datanode____.kbzd4v/webapp 2023-05-22 16:56:57,265 INFO [Listener at localhost/36627] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35309 2023-05-22 16:56:57,273 WARN [Listener at localhost/43569] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:56:57,288 WARN [Listener at localhost/43569] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:56:57,290 WARN [Listener at localhost/43569] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:56:57,292 INFO [Listener at localhost/43569] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:56:57,297 INFO [Listener at localhost/43569] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_37055_datanode____7nepcj/webapp 2023-05-22 16:56:57,372 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf772d995446d14e1: Processing first storage report for DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62 from datanode bc7c7a9a-0e0c-454e-81f5-bead530557e5 2023-05-22 16:56:57,372 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf772d995446d14e1: from storage DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62 node DatanodeRegistration(127.0.0.1:41761, datanodeUuid=bc7c7a9a-0e0c-454e-81f5-bead530557e5, infoPort=34141, infoSecurePort=0, ipcPort=43569, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:56:57,373 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf772d995446d14e1: Processing first storage report for DS-74e1725b-0ad8-41bd-9ac4-de8fa9da30b5 from datanode bc7c7a9a-0e0c-454e-81f5-bead530557e5 2023-05-22 16:56:57,373 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xf772d995446d14e1: from storage DS-74e1725b-0ad8-41bd-9ac4-de8fa9da30b5 node DatanodeRegistration(127.0.0.1:41761, datanodeUuid=bc7c7a9a-0e0c-454e-81f5-bead530557e5, infoPort=34141, infoSecurePort=0, ipcPort=43569, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:56:57,403 INFO [Listener at localhost/43569] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37055 2023-05-22 16:56:57,410 WARN [Listener at localhost/39615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:56:57,513 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x42e4f45414d8bd21: Processing first storage report for DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0 from datanode 358b80bc-69af-4fb5-a138-b4360464ab8a 2023-05-22 16:56:57,513 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x42e4f45414d8bd21: from storage DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0 node DatanodeRegistration(127.0.0.1:43407, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=44809, infoSecurePort=0, ipcPort=39615, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:56:57,513 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x42e4f45414d8bd21: Processing first storage report for DS-463143ee-5a8a-4f93-af79-e4a0ad4be6f4 from datanode 358b80bc-69af-4fb5-a138-b4360464ab8a 2023-05-22 16:56:57,513 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x42e4f45414d8bd21: from storage DS-463143ee-5a8a-4f93-af79-e4a0ad4be6f4 node DatanodeRegistration(127.0.0.1:43407, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=44809, infoSecurePort=0, ipcPort=39615, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:56:57,524 DEBUG [Listener at localhost/39615] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde 2023-05-22 16:56:57,527 INFO [Listener at localhost/39615] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/zookeeper_0, clientPort=64813, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 16:56:57,530 INFO [Listener at localhost/39615] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64813 2023-05-22 16:56:57,530 INFO [Listener at localhost/39615] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,532 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,551 INFO [Listener at localhost/39615] util.FSUtils(471): Created version file at hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a with version=8 2023-05-22 16:56:57,551 INFO [Listener at localhost/39615] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 16:56:57,554 INFO [Listener at localhost/39615] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:56:57,554 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,554 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,554 INFO [Listener at localhost/39615] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:56:57,555 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,555 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:56:57,555 INFO [Listener at localhost/39615] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:56:57,556 INFO [Listener at localhost/39615] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33463 2023-05-22 16:56:57,557 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,558 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,560 INFO [Listener at localhost/39615] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33463 connecting to ZooKeeper ensemble=127.0.0.1:64813 2023-05-22 16:56:57,571 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:334630x0, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:56:57,572 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33463-0x10053d304b90000 connected 2023-05-22 16:56:57,597 DEBUG [Listener at localhost/39615] 
zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:56:57,598 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:56:57,598 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:56:57,603 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33463 2023-05-22 16:56:57,603 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33463 2023-05-22 16:56:57,604 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33463 2023-05-22 16:56:57,608 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33463 2023-05-22 16:56:57,609 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33463 2023-05-22 16:56:57,609 INFO [Listener at localhost/39615] master.HMaster(444): hbase.rootdir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a, hbase.cluster.distributed=false 2023-05-22 16:56:57,625 INFO [Listener at localhost/39615] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:56:57,625 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,626 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,626 INFO [Listener at localhost/39615] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:56:57,626 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:57,626 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:56:57,626 INFO [Listener at localhost/39615] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:56:57,628 INFO [Listener at localhost/39615] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37447 2023-05-22 16:56:57,629 INFO [Listener at localhost/39615] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:56:57,631 DEBUG [Listener at localhost/39615] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 
16:56:57,632 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,634 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,636 INFO [Listener at localhost/39615] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37447 connecting to ZooKeeper ensemble=127.0.0.1:64813 2023-05-22 16:56:57,641 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:374470x0, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:56:57,642 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): regionserver:374470x0, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:56:57,642 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): regionserver:374470x0, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:56:57,643 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): regionserver:374470x0, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:56:57,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37447-0x10053d304b90001 connected 2023-05-22 16:56:57,647 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37447 2023-05-22 16:56:57,647 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37447 2023-05-22 16:56:57,650 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37447 2023-05-22 16:56:57,654 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37447 2023-05-22 16:56:57,658 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37447 2023-05-22 16:56:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,666 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:56:57,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,668 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:56:57,668 DEBUG [Listener at localhost/39615-EventThread] 
zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:56:57,669 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:56:57,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33463,1684774617553 from backup master directory 2023-05-22 16:56:57,670 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:56:57,673 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,673 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-22 16:56:57,673 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:56:57,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/hbase.id with ID: 3ee63d7b-25c3-4a59-80b0-a897a789baf8 2023-05-22 16:56:57,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:57,707 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2951b6d8 to 127.0.0.1:64813 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:56:57,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6799402b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:56:57,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', 
BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:56:57,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 16:56:57,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:56:57,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store-tmp 2023-05-22 16:56:57,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:57,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:56:57,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:57,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:57,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:56:57,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:56:57,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
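The 'proc' family attributes printed above (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false', ...) correspond to what the public descriptor builders produce. A hedged sketch of building an equivalent descriptor with the HBase 2.x client API, for a hypothetical ordinary table (master:store itself is internal and not created this way):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ProcLikeDescriptorSketch {
  public static void main(String[] args) {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_store"))  // hypothetical table name
        .setColumnFamily(proc)
        .build();
    System.out.println(td);
  }
}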
2023-05-22 16:56:57,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:56:57,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33463%2C1684774617553, suffix=, logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553, archiveDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/oldWALs, maxLogs=10 2023-05-22 16:56:57,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774617745 2023-05-22 16:56:57,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK], DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]] 2023-05-22 16:56:57,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:56:57,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:57,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,755 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,756 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 16:56:57,757 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 16:56:57,757 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:57,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:56:57,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:56:57,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=812792, jitterRate=0.03351891040802002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:56:57,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:56:57,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 16:56:57,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 16:56:57,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-22 16:56:57,767 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
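The WAL lines above (FSHLogProvider, blocksize=256 MB, rollsize=128 MB, maxLogs=10) are driven by configuration. A hedged sketch of the knobs involved; the key names are the ones I believe HBase 2.x uses, so treat them as assumptions to verify against the version in use rather than a definitive list:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key names.
    conf.set("hbase.wal.provider", "filesystem");                 // selects the FSHLog-based provider
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // rollsize = blocksize * multiplier
    conf.setInt("hbase.regionserver.maxlogs", 10);
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}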
2023-05-22 16:56:57,767 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-22 16:56:57,768 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-22 16:56:57,768 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 16:56:57,769 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 16:56:57,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 16:56:57,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 16:56:57,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-22 16:56:57,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 16:56:57,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 16:56:57,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 16:56:57,786 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 16:56:57,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 16:56:57,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 16:56:57,791 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:56:57,791 DEBUG [Listener at localhost/39615-EventThread] 
zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:56:57,791 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33463,1684774617553, sessionid=0x10053d304b90000, setting cluster-up flag (Was=false) 2023-05-22 16:56:57,795 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 16:56:57,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,804 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:57,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 16:56:57,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:57,812 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.hbase-snapshot/.tmp 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:56:57,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774647819 2023-05-22 16:56:57,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 16:56:57,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 16:56:57,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 16:56:57,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 16:56:57,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 16:56:57,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 16:56:57,822 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-22 16:56:57,823 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:56:57,823 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 16:56:57,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 16:56:57,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 16:56:57,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 16:56:57,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 16:56:57,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 16:56:57,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774617824,5,FailOnTimeoutGroup] 2023-05-22 16:56:57,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774617824,5,FailOnTimeoutGroup] 2023-05-22 16:56:57,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 16:56:57,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
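The cleaner chores initialized above (TimeToLiveLogCleaner, ReplicationLogCleaner, HFileLinkCleaner, SnapshotHFileCleaner, TimeToLiveHFileCleaner, ...) are pluggable and wired up through master configuration. A hedged sketch; the property names are assumptions based on the standard cleaner-plugin settings, while the class names are the ones appearing in the log above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key names for the log/hfile cleaner chains shown above.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);   // how long old WALs are retained
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    System.out.println(conf.get("hbase.master.logcleaner.plugins"));
  }
}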
2023-05-22 16:56:57,825 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:56:57,841 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:56:57,842 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:56:57,842 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a 2023-05-22 16:56:57,853 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:57,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:56:57,856 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/info 2023-05-22 16:56:57,857 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:56:57,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:57,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:56:57,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:56:57,859 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:56:57,860 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:57,860 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:56:57,861 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/table 2023-05-22 16:56:57,862 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:56:57,862 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:57,864 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740 2023-05-22 16:56:57,864 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(951): ClusterId : 3ee63d7b-25c3-4a59-80b0-a897a789baf8 2023-05-22 16:56:57,864 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740 2023-05-22 16:56:57,865 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:56:57,867 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:56:57,867 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:56:57,868 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
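Once the cluster is up, the hbase:meta descriptor written above (families info, rep_barrier, table) can be read back through the public Admin API. A minimal hedged sketch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class MetaDescriptorSketch {
  public static void main(String[] args) throws Exception {
    // Expects a reachable cluster (hbase-site.xml / ZooKeeper quorum on the classpath).
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
        System.out.println(cf.getNameAsString() + " versions=" + cf.getMaxVersions());
      }
    }
  }
}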
2023-05-22 16:56:57,870 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:56:57,871 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:56:57,872 DEBUG [RS:0;jenkins-hbase4:37447] zookeeper.ReadOnlyZKClient(139): Connect 0x7fef4d1b to 127.0.0.1:64813 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:56:57,876 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:56:57,876 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=730677, jitterRate=-0.07089690864086151}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:56:57,876 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:56:57,877 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:56:57,877 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:56:57,877 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:56:57,877 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:56:57,877 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:56:57,877 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:56:57,878 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:56:57,878 DEBUG [RS:0;jenkins-hbase4:37447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7128bc96, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:56:57,878 DEBUG [RS:0;jenkins-hbase4:37447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7517266b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:56:57,879 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:56:57,880 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 16:56:57,880 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 16:56:57,882 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 16:56:57,883 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting 
pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-22 16:56:57,890 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37447 2023-05-22 16:56:57,890 INFO [RS:0;jenkins-hbase4:37447] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:56:57,891 INFO [RS:0;jenkins-hbase4:37447] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:56:57,891 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1022): About to register with Master. 2023-05-22 16:56:57,891 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,33463,1684774617553 with isa=jenkins-hbase4.apache.org/172.31.14.131:37447, startcode=1684774617624 2023-05-22 16:56:57,892 DEBUG [RS:0;jenkins-hbase4:37447] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:56:57,895 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42643, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:56:57,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,897 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a 2023-05-22 16:56:57,897 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36627 2023-05-22 16:56:57,897 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:56:57,899 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:56:57,899 DEBUG [RS:0;jenkins-hbase4:37447] zookeeper.ZKUtil(162): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,899 WARN [RS:0;jenkins-hbase4:37447] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
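The reportForDuty/registration exchange above is what makes jenkins-hbase4.apache.org,37447,1684774617624 appear as a live server to the master. From a client, the same membership can be observed through ClusterMetrics; a hedged sketch:

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      System.out.println("active master: " + metrics.getMasterName());
      for (ServerName rs : metrics.getLiveServerMetrics().keySet()) {
        System.out.println("live regionserver: " + rs);  // host,port,startcode, as in the log above
      }
    }
  }
}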
2023-05-22 16:56:57,899 INFO [RS:0;jenkins-hbase4:37447] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:56:57,900 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,900 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37447,1684774617624] 2023-05-22 16:56:57,903 DEBUG [RS:0;jenkins-hbase4:37447] zookeeper.ZKUtil(162): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,904 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:56:57,905 INFO [RS:0;jenkins-hbase4:37447] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:56:57,907 INFO [RS:0;jenkins-hbase4:37447] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:56:57,908 INFO [RS:0;jenkins-hbase4:37447] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:56:57,908 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,908 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:56:57,909 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
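The entries above show RS:0 instantiating the FSHLog-based WAL provider and sizing its global memstore limits and compaction throughput. A minimal sketch, assuming the standard HBase 2.x configuration keys (hbase.wal.provider, hbase.regionserver.global.memstore.size, hbase.hregion.memstore.flush.size), of how a test could pin these settings before the cluster starts:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Use the classic FSHLog-based provider (the log above shows FSHLogProvider).
        conf.set("hbase.wal.provider", "filesystem");
        // Global memstore limit as a fraction of the heap (0.4 is the default);
        // the low-water mark reported by MemStoreFlusher is derived from it.
        conf.setDouble("hbase.regionserver.global.memstore.size", 0.4);
        // Per-region memstore flush threshold, 128 MB by default.
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        return conf;
      }
    }

The values here are the stock defaults; a log-rolling test would normally tighten them, but the exact values used by this run are not shown at this point in the log.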
2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,910 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:56:57,911 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,911 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,911 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,911 DEBUG [RS:0;jenkins-hbase4:37447] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:57,912 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,912 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,912 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:57,927 INFO [RS:0;jenkins-hbase4:37447] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:56:57,927 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37447,1684774617624-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:56:57,945 INFO [RS:0;jenkins-hbase4:37447] regionserver.Replication(203): jenkins-hbase4.apache.org,37447,1684774617624 started 2023-05-22 16:56:57,945 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37447,1684774617624, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37447, sessionid=0x10053d304b90001 2023-05-22 16:56:57,945 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:56:57,945 DEBUG [RS:0;jenkins-hbase4:37447] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,945 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37447,1684774617624' 2023-05-22 16:56:57,945 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:56:57,946 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37447,1684774617624' 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:56:57,947 DEBUG [RS:0;jenkins-hbase4:37447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:56:57,948 DEBUG [RS:0;jenkins-hbase4:37447] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:56:57,948 INFO [RS:0;jenkins-hbase4:37447] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:56:57,948 INFO [RS:0;jenkins-hbase4:37447] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
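At this point RS:0 has joined the ZK-coordinated procedure pools for table flushes (flush-table-proc) and online snapshots (online-snapshot) and reports quota support disabled. A minimal sketch, using the public Admin API and a hypothetical table name, of the client operations these components serve:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ProcedureMemberSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("someTable");  // hypothetical table
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Flush the table's memstores on its region servers.
          admin.flush(table);
          // Take an online snapshot; for an enabled table this is coordinated
          // through the online-snapshot procedure members started above.
          admin.snapshot("someTable_snap", table);
        }
      }
    }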
2023-05-22 16:56:58,033 DEBUG [jenkins-hbase4:33463] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 16:56:58,034 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37447,1684774617624, state=OPENING 2023-05-22 16:56:58,036 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 16:56:58,039 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:58,039 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:56:58,039 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37447,1684774617624}] 2023-05-22 16:56:58,050 INFO [RS:0;jenkins-hbase4:37447] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37447%2C1684774617624, suffix=, logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624, archiveDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/oldWALs, maxLogs=32 2023-05-22 16:56:58,065 INFO [RS:0;jenkins-hbase4:37447] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774618052 2023-05-22 16:56:58,065 DEBUG [RS:0;jenkins-hbase4:37447] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] 2023-05-22 16:56:58,194 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:58,194 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 16:56:58,197 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 16:56:58,202 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 16:56:58,202 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:56:58,204 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta, suffix=.meta, logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624, archiveDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/oldWALs, maxLogs=32 2023-05-22 16:56:58,215 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774618206.meta 2023-05-22 16:56:58,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] 2023-05-22 16:56:58,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:56:58,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 16:56:58,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 16:56:58,216 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 16:56:58,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 16:56:58,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:58,217 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 16:56:58,217 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 16:56:58,218 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:56:58,219 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/info 2023-05-22 16:56:58,219 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/info 2023-05-22 16:56:58,220 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:56:58,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:58,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:56:58,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:56:58,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:56:58,222 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:56:58,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:58,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:56:58,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/table 2023-05-22 16:56:58,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740/table 2023-05-22 16:56:58,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:56:58,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:58,227 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740 2023-05-22 16:56:58,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/meta/1588230740 2023-05-22 16:56:58,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:56:58,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:56:58,234 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=750287, jitterRate=-0.04596152901649475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:56:58,234 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:56:58,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774618194 2023-05-22 16:56:58,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 16:56:58,239 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 16:56:58,240 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37447,1684774617624, state=OPEN 2023-05-22 16:56:58,242 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 16:56:58,242 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:56:58,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 16:56:58,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37447,1684774617624 in 203 msec 2023-05-22 16:56:58,248 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 16:56:58,248 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 365 msec 2023-05-22 16:56:58,250 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 436 msec 2023-05-22 16:56:58,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774618251, completionTime=-1 2023-05-22 16:56:58,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 16:56:58,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 16:56:58,253 DEBUG [hconnection-0x4343490c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:56:58,255 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:56:58,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 16:56:58,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774678257 2023-05-22 16:56:58,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774738257 2023-05-22 16:56:58,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33463,1684774617553-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33463,1684774617553-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33463,1684774617553-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33463, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
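With hbase:meta assigned and opened on the single region server and its location published under /hbase/meta-region-server, a client can resolve the meta location (and from it any user region). A minimal sketch with the public client API, assuming the connection configuration points at this cluster's ZooKeeper quorum:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          // Expected to resolve to the server that just opened 1588230740, e.g.
          // jenkins-hbase4.apache.org,37447,1684774617624 in the log above.
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }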
2023-05-22 16:56:58,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:56:58,265 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 16:56:58,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 16:56:58,267 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:56:58,268 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:56:58,270 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,270 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17 empty. 2023-05-22 16:56:58,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,271 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 16:56:58,284 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 16:56:58,285 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5012e6cb0c696799837306c620b4ef17, NAME => 'hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp 2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5012e6cb0c696799837306c620b4ef17, disabling compactions & flushes 2023-05-22 16:56:58,296 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 
2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. after waiting 0 ms 2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:56:58,296 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:56:58,296 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5012e6cb0c696799837306c620b4ef17: 2023-05-22 16:56:58,300 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:56:58,301 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774618301"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774618301"}]},"ts":"1684774618301"} 2023-05-22 16:56:58,304 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:56:58,305 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:56:58,305 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774618305"}]},"ts":"1684774618305"} 2023-05-22 16:56:58,307 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 16:56:58,314 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5012e6cb0c696799837306c620b4ef17, ASSIGN}] 2023-05-22 16:56:58,315 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5012e6cb0c696799837306c620b4ef17, ASSIGN 2023-05-22 16:56:58,316 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5012e6cb0c696799837306c620b4ef17, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37447,1684774617624; forceNewPlan=false, retain=false 2023-05-22 16:56:58,467 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5012e6cb0c696799837306c620b4ef17, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:58,468 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774618467"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774618467"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774618467"}]},"ts":"1684774618467"} 2023-05-22 16:56:58,470 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 5012e6cb0c696799837306c620b4ef17, server=jenkins-hbase4.apache.org,37447,1684774617624}] 2023-05-22 16:56:58,628 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:56:58,628 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5012e6cb0c696799837306c620b4ef17, NAME => 'hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:56:58,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:58,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,630 INFO [StoreOpener-5012e6cb0c696799837306c620b4ef17-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,632 DEBUG [StoreOpener-5012e6cb0c696799837306c620b4ef17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/info 2023-05-22 16:56:58,632 DEBUG [StoreOpener-5012e6cb0c696799837306c620b4ef17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/info 2023-05-22 16:56:58,632 INFO [StoreOpener-5012e6cb0c696799837306c620b4ef17-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5012e6cb0c696799837306c620b4ef17 columnFamilyName info 2023-05-22 16:56:58,633 INFO [StoreOpener-5012e6cb0c696799837306c620b4ef17-1] regionserver.HStore(310): Store=5012e6cb0c696799837306c620b4ef17/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:58,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,639 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:56:58,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:56:58,642 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5012e6cb0c696799837306c620b4ef17; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=725110, jitterRate=-0.07797582447528839}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:56:58,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5012e6cb0c696799837306c620b4ef17: 2023-05-22 16:56:58,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17., pid=6, masterSystemTime=1684774618623 2023-05-22 16:56:58,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:56:58,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 
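The CreateTableProcedure above builds hbase:namespace with a single in-memory 'info' family (BLOOMFILTER ROW, VERSIONS 10, BLOCKSIZE 8192). A sketch of how an equivalent descriptor is expressed with the HBase 2.x builder API; the table name here is illustrative rather than the reserved system table:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeDescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_namespace_like"))  // hypothetical name
            .setColumnFamily(info)
            .build();
      }
    }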
2023-05-22 16:56:58,647 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5012e6cb0c696799837306c620b4ef17, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:58,648 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774618647"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774618647"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774618647"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774618647"}]},"ts":"1684774618647"} 2023-05-22 16:56:58,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 16:56:58,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 5012e6cb0c696799837306c620b4ef17, server=jenkins-hbase4.apache.org,37447,1684774617624 in 180 msec 2023-05-22 16:56:58,655 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 16:56:58,656 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5012e6cb0c696799837306c620b4ef17, ASSIGN in 340 msec 2023-05-22 16:56:58,657 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:56:58,657 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774618657"}]},"ts":"1684774618657"} 2023-05-22 16:56:58,658 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 16:56:58,661 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:56:58,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 397 msec 2023-05-22 16:56:58,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 16:56:58,669 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:56:58,669 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:58,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 16:56:58,681 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): 
master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:56:58,685 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-22 16:56:58,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 16:56:58,702 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:56:58,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-22 16:56:58,719 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 16:56:58,722 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 16:56:58,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.049sec 2023-05-22 16:56:58,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 16:56:58,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 16:56:58,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 16:56:58,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33463,1684774617553-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 16:56:58,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33463,1684774617553-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
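Once initialization completes, the master creates the 'default' and 'hbase' namespaces through CreateNamespaceProcedure. A minimal sketch of the corresponding client call for a user-defined namespace (the namespace name is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Mirrors the CreateNamespaceProcedure runs logged above, for a user namespace.
          admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
        }
      }
    }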
2023-05-22 16:56:58,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 16:56:58,764 DEBUG [Listener at localhost/39615] zookeeper.ReadOnlyZKClient(139): Connect 0x167f49d9 to 127.0.0.1:64813 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:56:58,769 DEBUG [Listener at localhost/39615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2617e6f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:56:58,770 DEBUG [hconnection-0x6cc17cca-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:56:58,772 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33208, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:56:58,774 INFO [Listener at localhost/39615] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:56:58,774 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:58,778 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 16:56:58,778 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:56:58,779 INFO [Listener at localhost/39615] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:56:58,791 INFO [Listener at localhost/39615] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:56:58,792 INFO [Listener at localhost/39615] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-22 16:56:58,793 INFO [Listener at localhost/39615] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46747 2023-05-22 16:56:58,793 INFO [Listener at localhost/39615] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:56:58,794 DEBUG [Listener at localhost/39615] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 16:56:58,794 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:58,795 INFO [Listener at localhost/39615] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:56:58,796 INFO [Listener at localhost/39615] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46747 connecting to ZooKeeper ensemble=127.0.0.1:64813 2023-05-22 16:56:58,800 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:467470x0, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:56:58,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46747-0x10053d304b90005 connected 2023-05-22 16:56:58,801 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(162): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:56:58,802 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(162): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-22 16:56:58,803 DEBUG [Listener at localhost/39615] zookeeper.ZKUtil(164): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:56:58,803 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46747 2023-05-22 16:56:58,803 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46747 2023-05-22 16:56:58,805 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46747 2023-05-22 16:56:58,805 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46747 2023-05-22 16:56:58,805 DEBUG [Listener at localhost/39615] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46747 2023-05-22 16:56:58,807 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(951): ClusterId : 3ee63d7b-25c3-4a59-80b0-a897a789baf8 2023-05-22 16:56:58,808 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:56:58,810 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:56:58,810 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:56:58,812 DEBUG 
[RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:56:58,813 DEBUG [RS:1;jenkins-hbase4:46747] zookeeper.ReadOnlyZKClient(139): Connect 0x6c08d862 to 127.0.0.1:64813 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:56:58,817 DEBUG [RS:1;jenkins-hbase4:46747] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62cf3b93, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:56:58,817 DEBUG [RS:1;jenkins-hbase4:46747] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66f0b312, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:56:58,826 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46747 2023-05-22 16:56:58,826 INFO [RS:1;jenkins-hbase4:46747] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:56:58,826 INFO [RS:1;jenkins-hbase4:46747] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:56:58,826 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1022): About to register with Master. 2023-05-22 16:56:58,827 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,33463,1684774617553 with isa=jenkins-hbase4.apache.org/172.31.14.131:46747, startcode=1684774618790 2023-05-22 16:56:58,827 DEBUG [RS:1;jenkins-hbase4:46747] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:56:58,830 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33447, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:56:58,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,831 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a 2023-05-22 16:56:58,831 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36627 2023-05-22 16:56:58,831 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:56:58,833 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:56:58,834 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:56:58,834 DEBUG [RS:1;jenkins-hbase4:46747] zookeeper.ZKUtil(162): regionserver:46747-0x10053d304b90005, 
quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,834 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46747,1684774618790] 2023-05-22 16:56:58,834 WARN [RS:1;jenkins-hbase4:46747] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-22 16:56:58,834 INFO [RS:1;jenkins-hbase4:46747] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:56:58,834 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:58,834 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,839 DEBUG [RS:1;jenkins-hbase4:46747] zookeeper.ZKUtil(162): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:58,839 DEBUG [RS:1;jenkins-hbase4:46747] zookeeper.ZKUtil(162): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,840 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:56:58,841 INFO [RS:1;jenkins-hbase4:46747] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:56:58,887 INFO [RS:1;jenkins-hbase4:46747] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:56:58,888 INFO [RS:1;jenkins-hbase4:46747] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:56:58,888 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,889 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:56:58,890 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-22 16:56:58,890 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,890 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,890 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,890 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,890 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,891 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:56:58,891 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,891 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,891 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,891 DEBUG [RS:1;jenkins-hbase4:46747] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:56:58,892 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,892 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,892 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:56:58,903 INFO [RS:1;jenkins-hbase4:46747] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:56:58,903 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46747,1684774618790-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:56:58,913 INFO [RS:1;jenkins-hbase4:46747] regionserver.Replication(203): jenkins-hbase4.apache.org,46747,1684774618790 started 2023-05-22 16:56:58,913 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46747,1684774618790, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46747, sessionid=0x10053d304b90005 2023-05-22 16:56:58,914 INFO [Listener at localhost/39615] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:46747,5,FailOnTimeoutGroup] 2023-05-22 16:56:58,914 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:56:58,914 INFO [Listener at localhost/39615] wal.TestLogRolling(323): Replication=2 2023-05-22 16:56:58,914 DEBUG [RS:1;jenkins-hbase4:46747] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,914 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46747,1684774618790' 2023-05-22 16:56:58,915 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:56:58,915 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:56:58,917 DEBUG [Listener at localhost/39615] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-22 16:56:58,917 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:56:58,917 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:56:58,917 DEBUG [RS:1;jenkins-hbase4:46747] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:56:58,917 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46747,1684774618790' 2023-05-22 16:56:58,917 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:56:58,918 DEBUG [RS:1;jenkins-hbase4:46747] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:56:58,919 DEBUG [RS:1;jenkins-hbase4:46747] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:56:58,919 INFO [RS:1;jenkins-hbase4:46747] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:56:58,919 INFO [RS:1;jenkins-hbase4:46747] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-22 16:56:58,920 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51134, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-22 16:56:58,922 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-05-22 16:56:58,922 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-22 16:56:58,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:56:58,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-22 16:56:58,926 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:56:58,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-22 16:56:58,927 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:56:58,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:56:58,929 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:58,930 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665 empty. 
2023-05-22 16:56:58,930 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:58,930 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-22 16:56:58,946 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-22 16:56:58,947 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 036fee999d8e5493ac953b8d20489665, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/.tmp 2023-05-22 16:56:58,957 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:58,957 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 036fee999d8e5493ac953b8d20489665, disabling compactions & flushes 2023-05-22 16:56:58,958 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:56:58,958 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:56:58,958 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. after waiting 0 ms 2023-05-22 16:56:58,958 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:56:58,958 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 
2023-05-22 16:56:58,958 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:56:58,961 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:56:58,962 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684774618962"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774618962"}]},"ts":"1684774618962"} 2023-05-22 16:56:58,964 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:56:58,965 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:56:58,965 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774618965"}]},"ts":"1684774618965"} 2023-05-22 16:56:58,967 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-22 16:56:58,974 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-22 16:56:58,976 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-22 16:56:58,976 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-22 16:56:58,976 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-22 16:56:58,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=036fee999d8e5493ac953b8d20489665, ASSIGN}] 2023-05-22 16:56:58,978 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=036fee999d8e5493ac953b8d20489665, ASSIGN 2023-05-22 16:56:58,979 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=036fee999d8e5493ac953b8d20489665, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37447,1684774617624; forceNewPlan=false, retain=false 2023-05-22 16:56:59,022 INFO [RS:1;jenkins-hbase4:46747] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46747%2C1684774618790, suffix=, logDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,46747,1684774618790, 
archiveDir=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/oldWALs, maxLogs=32 2023-05-22 16:56:59,033 INFO [RS:1;jenkins-hbase4:46747] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,46747,1684774618790/jenkins-hbase4.apache.org%2C46747%2C1684774618790.1684774619023 2023-05-22 16:56:59,033 DEBUG [RS:1;jenkins-hbase4:46747] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK], DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]] 2023-05-22 16:56:59,132 INFO [jenkins-hbase4:33463] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-22 16:56:59,133 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=036fee999d8e5493ac953b8d20489665, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:59,133 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684774619133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774619133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774619133"}]},"ts":"1684774619133"} 2023-05-22 16:56:59,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 036fee999d8e5493ac953b8d20489665, server=jenkins-hbase4.apache.org,37447,1684774617624}] 2023-05-22 16:56:59,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 
2023-05-22 16:56:59,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 036fee999d8e5493ac953b8d20489665, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:56:59,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:56:59,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,296 INFO [StoreOpener-036fee999d8e5493ac953b8d20489665-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,297 DEBUG [StoreOpener-036fee999d8e5493ac953b8d20489665-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info 2023-05-22 16:56:59,297 DEBUG [StoreOpener-036fee999d8e5493ac953b8d20489665-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info 2023-05-22 16:56:59,298 INFO [StoreOpener-036fee999d8e5493ac953b8d20489665-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 036fee999d8e5493ac953b8d20489665 columnFamilyName info 2023-05-22 16:56:59,298 INFO [StoreOpener-036fee999d8e5493ac953b8d20489665-1] regionserver.HStore(310): Store=036fee999d8e5493ac953b8d20489665/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:56:59,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 036fee999d8e5493ac953b8d20489665 2023-05-22 16:56:59,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:56:59,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 036fee999d8e5493ac953b8d20489665; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=790575, jitterRate=0.005268216133117676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:56:59,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:56:59,308 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665., pid=11, masterSystemTime=1684774619288 2023-05-22 16:56:59,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:56:59,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 
2023-05-22 16:56:59,310 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=036fee999d8e5493ac953b8d20489665, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:56:59,311 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684774619310"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774619310"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774619310"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774619310"}]},"ts":"1684774619310"} 2023-05-22 16:56:59,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-22 16:56:59,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 036fee999d8e5493ac953b8d20489665, server=jenkins-hbase4.apache.org,37447,1684774617624 in 178 msec 2023-05-22 16:56:59,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-22 16:56:59,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=036fee999d8e5493ac953b8d20489665, ASSIGN in 340 msec 2023-05-22 16:56:59,320 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:56:59,320 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774619320"}]},"ts":"1684774619320"} 2023-05-22 16:56:59,322 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-22 16:56:59,324 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:56:59,326 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 402 msec 2023-05-22 16:57:01,300 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 16:57:03,905 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-22 16:57:03,905 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 16:57:03,906 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-22 16:57:08,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:57:08,929 INFO [Listener at localhost/39615] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-22 16:57:08,933 DEBUG [Listener at localhost/39615] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-22 16:57:08,933 DEBUG [Listener at localhost/39615] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:08,948 WARN [Listener at localhost/39615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:08,951 WARN [Listener at localhost/39615] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:08,952 INFO [Listener at localhost/39615] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:08,958 INFO [Listener at localhost/39615] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_35907_datanode____.33hs57/webapp 2023-05-22 16:57:09,061 INFO [Listener at localhost/39615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35907 2023-05-22 16:57:09,072 WARN [Listener at localhost/46865] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:09,101 WARN [Listener at localhost/46865] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:09,104 WARN [Listener at localhost/46865] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:09,105 INFO [Listener at localhost/46865] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:09,108 INFO [Listener at localhost/46865] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_34331_datanode____1ugomv/webapp 2023-05-22 16:57:09,173 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e060d5208067125: Processing first storage report for DS-7b42e2f5-3c93-4413-87ec-2248766244c0 from datanode 1c8de4ca-4fe7-49e8-bfb5-c9194e531828 2023-05-22 16:57:09,173 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e060d5208067125: from storage DS-7b42e2f5-3c93-4413-87ec-2248766244c0 node DatanodeRegistration(127.0.0.1:46603, datanodeUuid=1c8de4ca-4fe7-49e8-bfb5-c9194e531828, infoPort=35065, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,173 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e060d5208067125: Processing first storage report for DS-7585e53b-651e-41bb-b3e4-878ecfe5f7b3 from datanode 1c8de4ca-4fe7-49e8-bfb5-c9194e531828 2023-05-22 16:57:09,173 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e060d5208067125: from storage DS-7585e53b-651e-41bb-b3e4-878ecfe5f7b3 node DatanodeRegistration(127.0.0.1:46603, datanodeUuid=1c8de4ca-4fe7-49e8-bfb5-c9194e531828, infoPort=35065, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,207 INFO [Listener at localhost/46865] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34331 2023-05-22 16:57:09,214 WARN [Listener at localhost/40247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:09,230 WARN [Listener at localhost/40247] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:09,232 WARN [Listener at localhost/40247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:09,233 INFO [Listener at localhost/40247] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:09,242 INFO [Listener at localhost/40247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_38795_datanode____.1ruyh7/webapp 2023-05-22 16:57:09,310 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xedb7f8b9a327f5e4: Processing first storage report for DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281 from datanode 245e6e26-4475-49a6-aa40-9411ec7cfd9a 2023-05-22 16:57:09,310 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xedb7f8b9a327f5e4: from storage DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281 node DatanodeRegistration(127.0.0.1:39893, datanodeUuid=245e6e26-4475-49a6-aa40-9411ec7cfd9a, infoPort=45725, infoSecurePort=0, ipcPort=40247, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,310 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xedb7f8b9a327f5e4: Processing first storage report for DS-f95def01-c62e-4f6a-b334-3c446fea356a from datanode 245e6e26-4475-49a6-aa40-9411ec7cfd9a 2023-05-22 16:57:09,310 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xedb7f8b9a327f5e4: from storage DS-f95def01-c62e-4f6a-b334-3c446fea356a node DatanodeRegistration(127.0.0.1:39893, datanodeUuid=245e6e26-4475-49a6-aa40-9411ec7cfd9a, infoPort=45725, infoSecurePort=0, ipcPort=40247, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,342 INFO [Listener at localhost/40247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38795 2023-05-22 16:57:09,350 WARN [Listener at localhost/42253] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:09,440 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef918bfaaa366de1: Processing first storage report for 
DS-d7f31b83-89b7-44b7-b755-15febafc3132 from datanode 40e7804b-e0d7-44ab-b2e3-58bf067a092f 2023-05-22 16:57:09,440 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef918bfaaa366de1: from storage DS-d7f31b83-89b7-44b7-b755-15febafc3132 node DatanodeRegistration(127.0.0.1:44683, datanodeUuid=40e7804b-e0d7-44ab-b2e3-58bf067a092f, infoPort=42053, infoSecurePort=0, ipcPort=42253, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,440 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef918bfaaa366de1: Processing first storage report for DS-b4dcbe7d-ab43-440b-ac8d-0b70a34deebb from datanode 40e7804b-e0d7-44ab-b2e3-58bf067a092f 2023-05-22 16:57:09,440 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef918bfaaa366de1: from storage DS-b4dcbe7d-ab43-440b-ac8d-0b70a34deebb node DatanodeRegistration(127.0.0.1:44683, datanodeUuid=40e7804b-e0d7-44ab-b2e3-58bf067a092f, infoPort=42053, infoSecurePort=0, ipcPort=42253, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:09,457 WARN [Listener at localhost/42253] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:09,458 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,459 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,460 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774618052 block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]) is bad. 
2023-05-22 16:57:09,461 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774618206.meta block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]) is bad. 2023-05-22 16:57:09,460 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:57:09,461 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:57:09,462 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43407]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,462 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,46747,1684774618790/jenkins-hbase4.apache.org%2C46747%2C1684774618790.1684774619023 block 
BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK], DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]) is bad. 2023-05-22 16:57:09,467 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774617745 block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK], DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK]) is bad. 2023-05-22 16:57:09,468 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43407]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,470 INFO [Listener at localhost/42253] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:09,473 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-250436217_17 at /127.0.0.1:34068 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34068 dst: /127.0.0.1:41761 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,473 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:33984 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33984 dst: /127.0.0.1:41761 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,477 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34010 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34010 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41761 remote=/127.0.0.1:34010]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,478 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41761]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,478 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34018 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34018 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41761 remote=/127.0.0.1:34018]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,479 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41761]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,479 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:54568 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43407:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54568 dst: /127.0.0.1:43407 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,482 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:54574 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43407:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54574 dst: /127.0.0.1:43407 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,512 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid 358b80bc-69af-4fb5-a138-b4360464ab8a) service to localhost/127.0.0.1:36627 2023-05-22 16:57:09,512 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data3/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:09,513 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data4/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:09,574 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-250436217_17 at /127.0.0.1:54626 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:43407:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54626 dst: /127.0.0.1:43407 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,574 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:54538 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43407:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54538 dst: /127.0.0.1:43407 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,575 WARN [Listener at localhost/42253] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:09,576 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,576 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,576 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,576 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1017] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:09,580 INFO [Listener at localhost/42253] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:09,683 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:50170 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50170 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,684 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-250436217_17 at /127.0.0.1:50184 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50184 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,684 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:50186 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50186 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,683 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:50210 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41761:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50210 dst: /127.0.0.1:41761 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:09,685 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:57:09,687 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid bc7c7a9a-0e0c-454e-81f5-bead530557e5) service to localhost/127.0.0.1:36627 2023-05-22 16:57:09,688 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data1/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:09,688 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data2/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:09,694 WARN [RS:0;jenkins-hbase4:37447.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:09,695 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37447%2C1684774617624:(num 1684774618052) roll requested 2023-05-22 16:57:09,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37447] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:09,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37447] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:33208 deadline: 1684774639693, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-22 16:57:09,700 WARN [Thread-629] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741839_1019 2023-05-22 16:57:09,703 WARN [Thread-629] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] 2023-05-22 16:57:09,711 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-22 16:57:09,711 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774618052 with entries=4, filesize=983 B; new WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 2023-05-22 16:57:09,713 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK], DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]] 2023-05-22 16:57:09,713 WARN [Close-WAL-Writer-0] 
wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:09,713 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774618052 is not closed yet, will try archiving it next time 2023-05-22 16:57:09,713 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774618052; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:21,750 INFO [Listener at localhost/42253] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 2023-05-22 16:57:21,751 WARN [Listener at localhost/42253] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:21,752 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:57:21,752 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 block BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK], DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK]) is bad. 
2023-05-22 16:57:21,757 INFO [Listener at localhost/42253] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:21,758 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:36068 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:39893:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36068 dst: /127.0.0.1:39893 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39893 remote=/127.0.0.1:36068]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:21,759 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39893]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:21,760 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:54776 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:46603:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54776 dst: /127.0.0.1:46603 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:21,862 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:57:21,862 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid 1c8de4ca-4fe7-49e8-bfb5-c9194e531828) service to localhost/127.0.0.1:36627 2023-05-22 16:57:21,863 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data5/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:21,863 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data6/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:21,867 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]] 2023-05-22 16:57:21,867 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]] 2023-05-22 16:57:21,867 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37447%2C1684774617624:(num 1684774629695) roll requested 2023-05-22 16:57:21,871 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:47836 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data8/current]'}, localName='127.0.0.1:39893', datanodeUuid='245e6e26-4475-49a6-aa40-9411ec7cfd9a', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741841_1022 to mirror 127.0.0.1:43407: java.net.ConnectException: Connection refused 2023-05-22 16:57:21,872 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741841_1022 2023-05-22 16:57:21,872 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:47836 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:39893:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47836 dst: /127.0.0.1:39893 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:21,872 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] 2023-05-22 16:57:21,874 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34800 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741842_1023]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', 
datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741842_1023 to mirror 127.0.0.1:41761: java.net.ConnectException: Connection refused 2023-05-22 16:57:21,874 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741842_1023 2023-05-22 16:57:21,874 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34800 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34800 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:21,875 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK] 2023-05-22 16:57:21,882 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774641868 2023-05-22 16:57:21,882 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK], DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]] 2023-05-22 16:57:21,882 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 is not closed yet, will try archiving it next time 2023-05-22 16:57:24,323 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5eaf4022] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39893, datanodeUuid=245e6e26-4475-49a6-aa40-9411ec7cfd9a, infoPort=45725, infoSecurePort=0, ipcPort=40247, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741840_1021 to 127.0.0.1:46603 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,872 WARN [Listener at localhost/42253] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:25,874 WARN [ResponseProcessor for block BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024 java.io.IOException: Bad response ERROR for BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024 from datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:57:25,874 WARN [DataStreamer for file /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774641868 block BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK], DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK]) is bad. 2023-05-22 16:57:25,874 WARN [PacketResponder: BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39893]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,876 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34816 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34816 dst: /127.0.0.1:44683 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,877 INFO [Listener at localhost/42253] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:25,981 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:47842 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:39893:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47842 dst: /127.0.0.1:39893 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,983 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:57:25,983 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid 245e6e26-4475-49a6-aa40-9411ec7cfd9a) service to localhost/127.0.0.1:36627 2023-05-22 16:57:25,983 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data7/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:25,984 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data8/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:25,988 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:25,988 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:25,988 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37447%2C1684774617624:(num 1684774641868) roll requested 2023-05-22 16:57:25,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37447] regionserver.HRegion(9158): Flush requested on 036fee999d8e5493ac953b8d20489665 2023-05-22 16:57:25,993 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34828 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741844_1026]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741844_1026 to mirror 127.0.0.1:46603: java.net.ConnectException: Connection refused 2023-05-22 16:57:25,993 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741844_1026 2023-05-22 16:57:25,993 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 036fee999d8e5493ac953b8d20489665 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:57:25,993 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34828 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741844_1026]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34828 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,994 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK] 2023-05-22 16:57:25,997 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34836 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741845_1027 to mirror 127.0.0.1:41761: java.net.ConnectException: Connection refused 2023-05-22 16:57:25,997 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741845_1027 2023-05-22 16:57:25,997 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34836 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34836 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:25,997 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK] 2023-05-22 16:57:26,000 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34840 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741846_1028 to mirror 127.0.0.1:39893: java.net.ConnectException: Connection refused 2023-05-22 16:57:26,000 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741846_1028 2023-05-22 16:57:26,000 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34840 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34840 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:26,001 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:26,002 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741847_1029 2023-05-22 16:57:26,002 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK] 2023-05-22 16:57:26,003 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34856 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741848_1030]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741848_1030 to mirror 127.0.0.1:43407: java.net.ConnectException: Connection refused 2023-05-22 16:57:26,003 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741848_1030 2023-05-22 16:57:26,003 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34856 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741848_1030]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: 
/127.0.0.1:34856 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:26,004 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741849_1031 2023-05-22 16:57:26,004 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] 2023-05-22 16:57:26,004 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:26,005 WARN [IPC Server handler 0 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-22 16:57:26,005 WARN [IPC Server handler 0 on default port 36627] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-22 16:57:26,005 WARN [IPC Server handler 0 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-22 16:57:26,007 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741850_1032 2023-05-22 16:57:26,008 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] 2023-05-22 16:57:26,009 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741852_1034 2023-05-22 16:57:26,009 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK] 2023-05-22 16:57:26,010 WARN [IPC Server handler 3 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-22 16:57:26,010 WARN [IPC Server handler 3 on default port 36627] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-22 16:57:26,010 WARN [IPC Server handler 3 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-22 16:57:26,013 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34862 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741850_1032]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741850_1032 to mirror 127.0.0.1:43407: java.net.ConnectException: Connection refused 2023-05-22 16:57:26,013 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34862 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741850_1032]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34862 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:26,015 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774641868 with entries=13, filesize=14.09 KB; new WAL 
/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774645988 2023-05-22 16:57:26,015 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:26,016 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774641868 is not closed yet, will try archiving it next time 2023-05-22 16:57:26,209 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:26,209 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:26,209 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37447%2C1684774617624:(num 1684774645988) roll requested 2023-05-22 16:57:26,213 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34890 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741854_1036]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741854_1036 to mirror 127.0.0.1:46603: java.net.ConnectException: Connection refused 2023-05-22 16:57:26,213 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741854_1036 2023-05-22 16:57:26,213 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34890 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741854_1036]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34890 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:26,214 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK] 2023-05-22 16:57:26,215 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741855_1037 2023-05-22 16:57:26,215 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK] 2023-05-22 16:57:26,217 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741856_1038 2023-05-22 16:57:26,217 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43407,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK] 2023-05-22 16:57:26,219 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34894 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741857_1039]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741857_1039 to mirror 127.0.0.1:39893: java.net.ConnectException: Connection refused 2023-05-22 16:57:26,219 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741857_1039 2023-05-22 16:57:26,220 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:34894 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741857_1039]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34894 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:26,220 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:26,221 WARN [IPC Server handler 3 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log 
level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-22 16:57:26,221 WARN [IPC Server handler 3 on default port 36627] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-22 16:57:26,221 WARN [IPC Server handler 3 on default port 36627] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-22 16:57:26,225 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774645988 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774646209 2023-05-22 16:57:26,225 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:26,225 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774641868 is not closed yet, will try archiving it next time 2023-05-22 16:57:26,225 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774645988 is not closed yet, will try archiving it next time 2023-05-22 16:57:26,412 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-05-22 16:57:26,417 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/.tmp/info/0dae720e9e3048148afe1bb88c93400c 2023-05-22 16:57:26,418 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774645988 is not closed yet, will try archiving it next time 2023-05-22 16:57:26,426 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/.tmp/info/0dae720e9e3048148afe1bb88c93400c as hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info/0dae720e9e3048148afe1bb88c93400c 2023-05-22 16:57:26,431 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info/0dae720e9e3048148afe1bb88c93400c, entries=5, sequenceid=12, filesize=10.0 K 2023-05-22 16:57:26,432 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 036fee999d8e5493ac953b8d20489665 in 439ms, sequenceid=12, compaction requested=false 2023-05-22 16:57:26,433 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:57:26,618 WARN [Listener at localhost/42253] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:26,620 WARN [Listener at localhost/42253] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:26,621 INFO [Listener at localhost/42253] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:26,626 INFO [Listener at localhost/42253] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/java.io.tmpdir/Jetty_localhost_42209_datanode____2g28ys/webapp 2023-05-22 16:57:26,629 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 to hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/oldWALs/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774629695 2023-05-22 16:57:26,717 INFO [Listener at localhost/42253] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42209 2023-05-22 16:57:26,724 WARN [Listener at localhost/36147] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 
16:57:26,822 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbc305a125a31ae1d: Processing first storage report for DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0 from datanode 358b80bc-69af-4fb5-a138-b4360464ab8a 2023-05-22 16:57:26,823 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbc305a125a31ae1d: from storage DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0 node DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-22 16:57:26,823 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbc305a125a31ae1d: Processing first storage report for DS-463143ee-5a8a-4f93-af79-e4a0ad4be6f4 from datanode 358b80bc-69af-4fb5-a138-b4360464ab8a 2023-05-22 16:57:26,824 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbc305a125a31ae1d: from storage DS-463143ee-5a8a-4f93-af79-e4a0ad4be6f4 node DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:27,820 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:27,821 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C33463%2C1684774617553:(num 1684774617745) roll requested 2023-05-22 16:57:27,825 WARN [Thread-709] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741859_1041 2023-05-22 16:57:27,826 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:27,826 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:27,827 WARN [Thread-709] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:27,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:35864 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data4/current]'}, localName='127.0.0.1:44937', datanodeUuid='358b80bc-69af-4fb5-a138-b4360464ab8a', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741860_1042 to mirror 127.0.0.1:46603: java.net.ConnectException: Connection refused 2023-05-22 16:57:27,829 WARN [Thread-709] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741860_1042 2023-05-22 16:57:27,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:35864 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:44937:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35864 dst: /127.0.0.1:44937 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:27,830 WARN [Thread-709] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46603,DS-7b42e2f5-3c93-4413-87ec-2248766244c0,DISK] 2023-05-22 16:57:27,836 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-22 16:57:27,836 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774617745 with entries=88, filesize=43.71 KB; new WAL 
/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774647821 2023-05-22 16:57:27,836 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44937,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:27,836 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774617745 is not closed yet, will try archiving it next time 2023-05-22 16:57:27,836 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:27,837 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553/jenkins-hbase4.apache.org%2C33463%2C1684774617553.1684774617745; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:28,443 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7b2d1ec4] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44683, datanodeUuid=40e7804b-e0d7-44ab-b2e3-58bf067a092f, infoPort=42053, infoSecurePort=0, ipcPort=42253, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741851_1033 to 127.0.0.1:46603 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:39,824 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@25cefac1] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741837_1013 to 127.0.0.1:39893 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:40,823 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@30e1c4b8] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741831_1007 to 127.0.0.1:39893 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:42,824 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6f58b016] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, 
storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741828_1004 to 127.0.0.1:39893 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:45,364 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:49764 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741862_1044]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741862_1044 to mirror 127.0.0.1:39893: java.net.ConnectException: Connection refused 2023-05-22 16:57:45,364 WARN [Thread-728] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741862_1044 2023-05-22 16:57:45,365 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1359749407_17 at /127.0.0.1:49764 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741862_1044]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49764 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:45,365 WARN [Thread-728] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:45,375 INFO [Listener at localhost/36147] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774646209 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774665359 2023-05-22 16:57:45,375 DEBUG [Listener at 
localhost/36147] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44937,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:45,376 DEBUG [Listener at localhost/36147] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.1684774646209 is not closed yet, will try archiving it next time 2023-05-22 16:57:45,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37447] regionserver.HRegion(9158): Flush requested on 036fee999d8e5493ac953b8d20489665 2023-05-22 16:57:45,380 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 036fee999d8e5493ac953b8d20489665 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-22 16:57:45,381 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-22 16:57:45,388 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:49780 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741864_1046 to mirror 127.0.0.1:39893: java.net.ConnectException: Connection refused 2023-05-22 16:57:45,388 WARN [Thread-736] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741864_1046 2023-05-22 16:57:45,388 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:49780 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49780 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:45,389 WARN [Thread-736] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:45,394 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-22 16:57:45,394 INFO [Listener at localhost/36147] 
client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-22 16:57:45,394 DEBUG [Listener at localhost/36147] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x167f49d9 to 127.0.0.1:64813 2023-05-22 16:57:45,394 DEBUG [Listener at localhost/36147] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,394 DEBUG [Listener at localhost/36147] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-22 16:57:45,394 DEBUG [Listener at localhost/36147] util.JVMClusterUtil(257): Found active master hash=128594594, stopped=false 2023-05-22 16:57:45,394 INFO [Listener at localhost/36147] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:57:45,398 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:57:45,398 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:57:45,398 INFO [Listener at localhost/36147] procedure2.ProcedureExecutor(629): Stopping 2023-05-22 16:57:45,398 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:57:45,398 DEBUG [Listener at localhost/36147] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2951b6d8 to 127.0.0.1:64813 2023-05-22 16:57:45,398 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:45,400 DEBUG [Listener at localhost/36147] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,400 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:57:45,400 INFO [Listener at localhost/36147] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37447,1684774617624' ***** 2023-05-22 16:57:45,400 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:57:45,400 INFO [Listener at localhost/36147] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 16:57:45,400 INFO [Listener at localhost/36147] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46747,1684774618790' ***** 2023-05-22 16:57:45,400 INFO [Listener at localhost/36147] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 16:57:45,400 INFO [RS:0;jenkins-hbase4:37447] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 16:57:45,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:57:45,401 INFO 
[RS:1;jenkins-hbase4:46747] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 16:57:45,401 INFO [RS:1;jenkins-hbase4:46747] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 16:57:45,401 INFO [RS:1;jenkins-hbase4:46747] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 16:57:45,401 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 16:57:45,401 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:57:45,402 DEBUG [RS:1;jenkins-hbase4:46747] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c08d862 to 127.0.0.1:64813 2023-05-22 16:57:45,402 DEBUG [RS:1;jenkins-hbase4:46747] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,402 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46747,1684774618790; all regions closed. 2023-05-22 16:57:45,402 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:57:45,402 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/.tmp/info/6c29f637d5344ab6b114247bcba55a3c 2023-05-22 16:57:45,404 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,405 ERROR [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 2023-05-22 16:57:45,405 DEBUG [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,405 DEBUG [RS:1;jenkins-hbase4:46747] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,405 INFO [RS:1;jenkins-hbase4:46747] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:57:45,405 INFO [RS:1;jenkins-hbase4:46747] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-22 16:57:45,405 INFO [RS:1;jenkins-hbase4:46747] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 16:57:45,405 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:57:45,405 INFO [RS:1;jenkins-hbase4:46747] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-22 16:57:45,405 INFO [RS:1;jenkins-hbase4:46747] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 16:57:45,406 INFO [RS:1;jenkins-hbase4:46747] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46747 2023-05-22 16:57:45,413 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:57:45,413 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46747,1684774618790 2023-05-22 16:57:45,413 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:57:45,413 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:57:45,413 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:57:45,414 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46747,1684774618790] 2023-05-22 16:57:45,414 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46747,1684774618790; numProcessing=1 2023-05-22 16:57:45,416 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46747,1684774618790 already deleted, retry=false 2023-05-22 16:57:45,416 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,46747,1684774618790 expired; onlineServers=1 2023-05-22 16:57:45,419 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/.tmp/info/6c29f637d5344ab6b114247bcba55a3c as hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info/6c29f637d5344ab6b114247bcba55a3c 2023-05-22 16:57:45,426 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info/6c29f637d5344ab6b114247bcba55a3c, entries=8, sequenceid=25, filesize=13.2 K 2023-05-22 16:57:45,427 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 036fee999d8e5493ac953b8d20489665 in 47ms, sequenceid=25, compaction requested=false 2023-05-22 16:57:45,427 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:57:45,427 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-22 16:57:45,427 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:57:45,427 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/036fee999d8e5493ac953b8d20489665/info/6c29f637d5344ab6b114247bcba55a3c because midkey is the same as first or last row 2023-05-22 16:57:45,427 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 16:57:45,427 INFO [RS:0;jenkins-hbase4:37447] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 16:57:45,428 INFO [RS:0;jenkins-hbase4:37447] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 16:57:45,428 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(3303): Received CLOSE for 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:57:45,430 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(3303): Received CLOSE for 036fee999d8e5493ac953b8d20489665 2023-05-22 16:57:45,430 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:57:45,431 DEBUG [RS:0;jenkins-hbase4:37447] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7fef4d1b to 127.0.0.1:64813 2023-05-22 16:57:45,431 DEBUG [RS:0;jenkins-hbase4:37447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5012e6cb0c696799837306c620b4ef17, disabling compactions & flushes 2023-05-22 16:57:45,431 INFO [RS:0;jenkins-hbase4:37447] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 16:57:45,431 INFO [RS:0;jenkins-hbase4:37447] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-22 16:57:45,431 INFO [RS:0;jenkins-hbase4:37447] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 16:57:45,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:57:45,431 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 16:57:45,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:57:45,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. after waiting 0 ms 2023-05-22 16:57:45,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:57:45,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5012e6cb0c696799837306c620b4ef17 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 16:57:45,432 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-22 16:57:45,432 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1478): Online Regions={5012e6cb0c696799837306c620b4ef17=hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17., 1588230740=hbase:meta,,1.1588230740, 036fee999d8e5493ac953b8d20489665=TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665.} 2023-05-22 16:57:45,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:57:45,433 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1504): Waiting on 036fee999d8e5493ac953b8d20489665, 1588230740, 5012e6cb0c696799837306c620b4ef17 2023-05-22 16:57:45,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:57:45,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:57:45,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:57:45,433 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:57:45,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-05-22 16:57:45,433 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,434 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta:.meta(num 1684774618206) roll requested 2023-05-22 16:57:45,434 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:57:45,435 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,37447,1684774617624: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,436 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-22 16:57:45,439 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-22 16:57:45,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-22 16:57:45,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-22 16:57:45,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-22 16:57:45,441 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 985137152, "init": 513802240, "max": 2051014656, "used": 397114888 }, "NonHeapMemoryUsage": { "committed": 133652480, "init": 2555904, "max": -1, "used": 130955912 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-22 16:57:45,446 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33463] 
master.MasterRpcServices(609): jenkins-hbase4.apache.org,37447,1684774617624 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,37447,1684774617624: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,447 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:49802 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741867_1049]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current]'}, localName='127.0.0.1:44683', datanodeUuid='40e7804b-e0d7-44ab-b2e3-58bf067a092f', xmitsInProgress=0}:Exception transfering block BP-705912352-172.31.14.131-1684774616949:blk_1073741867_1049 to mirror 127.0.0.1:39893: java.net.ConnectException: Connection refused 2023-05-22 16:57:45,447 WARN [Thread-746] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741867_1049 2023-05-22 16:57:45,447 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866860234_17 at /127.0.0.1:49802 [Receiving block BP-705912352-172.31.14.131-1684774616949:blk_1073741867_1049]] datanode.DataXceiver(323): 127.0.0.1:44683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49802 dst: /127.0.0.1:44683 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:45,448 WARN [Thread-746] 
hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:45,454 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-22 16:57:45,454 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774618206.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774665434.meta 2023-05-22 16:57:45,455 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44937,DS-acdb3ebb-327c-4796-bcf0-9a444c167fd0,DISK], DatanodeInfoWithStorage[127.0.0.1:44683,DS-d7f31b83-89b7-44b7-b755-15febafc3132,DISK]] 2023-05-22 16:57:45,455 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774618206.meta is not closed yet, will try archiving it next time 2023-05-22 16:57:45,455 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,455 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624/jenkins-hbase4.apache.org%2C37447%2C1684774617624.meta.1684774618206.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41761,DS-30eb6f6b-1ee3-42ef-b907-62bccbbcaf62,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:57:45,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/.tmp/info/0891324f4635466daab9ce75041cb965 2023-05-22 16:57:45,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/.tmp/info/0891324f4635466daab9ce75041cb965 as hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/info/0891324f4635466daab9ce75041cb965 2023-05-22 16:57:45,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/info/0891324f4635466daab9ce75041cb965, entries=2, sequenceid=6, filesize=4.8 K 2023-05-22 16:57:45,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5012e6cb0c696799837306c620b4ef17 in 40ms, sequenceid=6, compaction requested=false 2023-05-22 16:57:45,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/data/hbase/namespace/5012e6cb0c696799837306c620b4ef17/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-22 16:57:45,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:57:45,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5012e6cb0c696799837306c620b4ef17: 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774618263.5012e6cb0c696799837306c620b4ef17. 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 036fee999d8e5493ac953b8d20489665, disabling compactions & flushes 2023-05-22 16:57:45,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 
after waiting 0 ms 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:57:45,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,633 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 16:57:45,633 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(3303): Received CLOSE for 036fee999d8e5493ac953b8d20489665 2023-05-22 16:57:45,633 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:57:45,633 DEBUG [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1504): Waiting on 036fee999d8e5493ac953b8d20489665, 1588230740 2023-05-22 16:57:45,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:57:45,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 036fee999d8e5493ac953b8d20489665, disabling compactions & flushes 2023-05-22 16:57:45,633 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:57:45,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,633 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:57:45,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. after waiting 0 ms 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 036fee999d8e5493ac953b8d20489665: 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-22 16:57:45,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1684774618922.036fee999d8e5493ac953b8d20489665. 2023-05-22 16:57:45,697 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:45,697 INFO [RS:1;jenkins-hbase4:46747] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46747,1684774618790; zookeeper connection closed. 2023-05-22 16:57:45,697 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:46747-0x10053d304b90005, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:45,698 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4a2149b2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4a2149b2 2023-05-22 16:57:45,823 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@d68eec4] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:44937, datanodeUuid=358b80bc-69af-4fb5-a138-b4360464ab8a, infoPort=35837, infoSecurePort=0, ipcPort=36147, storageInfo=lv=-57;cid=testClusterID;nsid=1477914146;c=1684774616949):Failed to transfer BP-705912352-172.31.14.131-1684774616949:blk_1073741825_1001 to 127.0.0.1:39893 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:57:45,833 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-22 16:57:45,833 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37447,1684774617624; all regions closed. 
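The entries above trace this test's failure path: the HDFS pipeline backing the WAL dies ("All datanodes ... are bad"), the ring-buffer append fails with DamagedWALException, and the region server aborts while closing hbase:meta. As a rough illustration of how that kind of failure can be provoked against the mini clusters used here, the Java sketch below stops the backing datanodes and then asks for a WAL roll. This is not the actual TestLogRolling test body; the class name, the testUtil parameter, and the choice to stop every datanode are illustrative assumptions.

```java
// Sketch only: provoking a WAL pipeline failure against a running mini cluster.
// Assumes testUtil is an HBaseTestingUtility whose mini cluster is already up.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class WalPipelineFailureSketch {
  public static void provokeRollFailure(HBaseTestingUtility testUtil) throws Exception {
    MiniDFSCluster dfs = testUtil.getDFSCluster();
    // Stop the datanodes serving the WAL pipeline; subsequent appends should
    // fail with "All datanodes ... are bad", as seen in the log above.
    for (int i = dfs.getDataNodes().size() - 1; i >= 0; i--) {
      dfs.stopDataNode(i);
    }
    // Ask the region server to roll its WAL; with the pipeline gone, this is
    // where a DamagedWALException (and possibly an abort) can surface.
    try (Admin admin = testUtil.getConnection().getAdmin()) {
      ServerName rs = testUtil.getMiniHBaseCluster().getRegionServer(0).getServerName();
      admin.rollWALWriter(rs);
    }
  }
}
```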
2023-05-22 16:57:45,834 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:57:45,839 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/WALs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:57:45,842 DEBUG [RS:0;jenkins-hbase4:37447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,842 INFO [RS:0;jenkins-hbase4:37447] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:57:45,843 INFO [RS:0;jenkins-hbase4:37447] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-22 16:57:45,843 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:57:45,843 INFO [RS:0;jenkins-hbase4:37447] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37447 2023-05-22 16:57:45,845 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37447,1684774617624 2023-05-22 16:57:45,845 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:57:45,846 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37447,1684774617624] 2023-05-22 16:57:45,847 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37447,1684774617624; numProcessing=2 2023-05-22 16:57:45,848 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37447,1684774617624 already deleted, retry=false 2023-05-22 16:57:45,848 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37447,1684774617624 expired; onlineServers=0 2023-05-22 16:57:45,848 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33463,1684774617553' ***** 2023-05-22 16:57:45,848 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 16:57:45,848 DEBUG [M:0;jenkins-hbase4:33463] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@395b68f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:57:45,848 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:57:45,848 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33463,1684774617553; all regions closed. 
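Once the region server's ephemeral znode disappears, the master treats the server as expired and, with cluster shutdown already set, begins stopping itself. A test that needs to wait for that transition could simply poll the mini cluster's live-server list; the helper below is a hedged sketch (class and method names are invented for illustration, and the real tests use the Waiter utilities rather than hand-rolled polling).

```java
// Sketch only: wait for an aborted region server to drop out of the live list,
// mirroring the "ephemeral node deleted, processing expiration" step above.
// 'cluster' is assumed to come from HBaseTestingUtility#getMiniHBaseCluster().
import org.apache.hadoop.hbase.MiniHBaseCluster;

public final class WaitForRegionServerExit {
  public static void await(MiniHBaseCluster cluster, int expectedLive, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (cluster.getLiveRegionServerThreads().size() > expectedLive) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Region server did not exit within " + timeoutMs + " ms");
      }
      Thread.sleep(100); // simple polling; illustrative only
    }
  }
}
```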
2023-05-22 16:57:45,848 DEBUG [M:0;jenkins-hbase4:33463] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:57:45,848 DEBUG [M:0;jenkins-hbase4:33463] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 16:57:45,849 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-22 16:57:45,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774617824] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774617824,5,FailOnTimeoutGroup] 2023-05-22 16:57:45,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774617824] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774617824,5,FailOnTimeoutGroup] 2023-05-22 16:57:45,849 DEBUG [M:0;jenkins-hbase4:33463] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-22 16:57:45,850 INFO [M:0;jenkins-hbase4:33463] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-22 16:57:45,850 INFO [M:0;jenkins-hbase4:33463] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-22 16:57:45,850 INFO [M:0;jenkins-hbase4:33463] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-22 16:57:45,850 DEBUG [M:0;jenkins-hbase4:33463] master.HMaster(1512): Stopping service threads 2023-05-22 16:57:45,850 INFO [M:0;jenkins-hbase4:33463] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-05-22 16:57:45,851 ERROR [M:0;jenkins-hbase4:33463] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-22 16:57:45,851 INFO [M:0;jenkins-hbase4:33463] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-22 16:57:45,851 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-22 16:57:45,852 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-22 16:57:45,852 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:45,852 DEBUG [M:0;jenkins-hbase4:33463] zookeeper.ZKUtil(398): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-22 16:57:45,852 WARN [M:0;jenkins-hbase4:33463] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-22 16:57:45,852 INFO [M:0;jenkins-hbase4:33463] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-22 16:57:45,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:57:45,852 INFO [M:0;jenkins-hbase4:33463] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-22 16:57:45,853 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:57:45,853 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:45,853 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:45,853 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:57:45,853 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-22 16:57:45,853 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.08 KB heapSize=45.73 KB 2023-05-22 16:57:45,861 WARN [Thread-764] hdfs.DataStreamer(1658): Abandoning BP-705912352-172.31.14.131-1684774616949:blk_1073741869_1051 2023-05-22 16:57:45,861 WARN [Thread-764] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39893,DS-af0bcc6d-85da-4f9f-9398-e5b2344fb281,DISK] 2023-05-22 16:57:45,866 INFO [M:0;jenkins-hbase4:33463] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.08 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9ef438381786462188fd3b303bdaf8b8 2023-05-22 16:57:45,872 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9ef438381786462188fd3b303bdaf8b8 as hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9ef438381786462188fd3b303bdaf8b8 2023-05-22 16:57:45,877 INFO [M:0;jenkins-hbase4:33463] regionserver.HStore(1080): Added hdfs://localhost:36627/user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9ef438381786462188fd3b303bdaf8b8, entries=11, sequenceid=92, filesize=7.0 K 2023-05-22 16:57:45,878 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegion(2948): Finished flush of dataSize ~38.08 KB/38997, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=92, compaction requested=false 2023-05-22 16:57:45,879 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:45,879 DEBUG [M:0;jenkins-hbase4:33463] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:57:45,879 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0963ead9-5036-d440-c6fc-966f820f9e2a/MasterData/WALs/jenkins-hbase4.apache.org,33463,1684774617553 2023-05-22 16:57:45,882 INFO [M:0;jenkins-hbase4:33463] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-22 16:57:45,882 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:57:45,883 INFO [M:0;jenkins-hbase4:33463] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33463 2023-05-22 16:57:45,885 DEBUG [M:0;jenkins-hbase4:33463] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33463,1684774617553 already deleted, retry=false 2023-05-22 16:57:45,916 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:57:45,998 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:45,998 INFO [M:0;jenkins-hbase4:33463] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33463,1684774617553; zookeeper connection closed. 
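The master's local store goes through the same flush sequence seen earlier for hbase:namespace: the memstore is written to a .tmp file, committed into the store directory, and recorded with its sequence id ("Added ... entries=11, sequenceid=92") before the region closes. For an ordinary table that same flush path can be exercised from a client with Admin#flush; the snippet below is only a sketch (the connection passed in is assumed to already point at the mini cluster, and the table name is taken from this log).

```java
// Sketch only: trigger the memstore-flush path shown in the log for a user table.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public final class FlushExample {
  public static void flushTable(Connection connection) throws Exception {
    try (Admin admin = connection.getAdmin()) {
      // Flushes every region of the table: memstore -> .tmp file -> store dir.
      admin.flush(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"));
    }
  }
}
```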
2023-05-22 16:57:45,998 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): master:33463-0x10053d304b90000, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:46,098 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:46,098 INFO [RS:0;jenkins-hbase4:37447] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37447,1684774617624; zookeeper connection closed. 2023-05-22 16:57:46,098 DEBUG [Listener at localhost/39615-EventThread] zookeeper.ZKWatcher(600): regionserver:37447-0x10053d304b90001, quorum=127.0.0.1:64813, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:57:46,099 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1819cf05] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1819cf05 2023-05-22 16:57:46,100 INFO [Listener at localhost/36147] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-22 16:57:46,100 WARN [Listener at localhost/36147] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:46,104 INFO [Listener at localhost/36147] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:46,208 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:57:46,208 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid 358b80bc-69af-4fb5-a138-b4360464ab8a) service to localhost/127.0.0.1:36627 2023-05-22 16:57:46,208 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data3/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:46,209 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data4/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:46,210 WARN [Listener at localhost/36147] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:57:46,213 INFO [Listener at localhost/36147] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:46,316 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:57:46,316 WARN [BP-705912352-172.31.14.131-1684774616949 heartbeating to localhost/127.0.0.1:36627] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-705912352-172.31.14.131-1684774616949 (Datanode Uuid 40e7804b-e0d7-44ab-b2e3-58bf067a092f) service to localhost/127.0.0.1:36627 2023-05-22 16:57:46,317 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data9/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:46,317 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/cluster_778ae4d5-15f4-5c32-d946-5b07f3dfb182/dfs/data/data10/current/BP-705912352-172.31.14.131-1684774616949] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:57:46,328 INFO [Listener at localhost/36147] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:57:46,444 INFO [Listener at localhost/36147] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-22 16:57:46,479 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-22 16:57:46,489 INFO [Listener at localhost/36147] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=74 (was 51) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:36627 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:36627 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:36627 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:36627 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:36627 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:36627 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36147 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=460 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=95 (was 91) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=5516 (was 5958) 2023-05-22 16:57:46,498 INFO [Listener at localhost/36147] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=74, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=95, ProcessCount=169, AvailableMemoryMB=5516 2023-05-22 16:57:46,498 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 16:57:46,498 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/hadoop.log.dir so I do NOT create it in target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401 2023-05-22 16:57:46,498 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5570634d-4595-b066-964e-41db08c8ccde/hadoop.tmp.dir so I do NOT create it in target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401 2023-05-22 16:57:46,498 INFO [Listener at localhost/36147] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82, deleteOnExit=true 2023-05-22 16:57:46,499 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 16:57:46,499 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/test.cache.data in system properties and HBase conf 2023-05-22 16:57:46,499 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 16:57:46,499 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/hadoop.log.dir in system properties and HBase conf 2023-05-22 16:57:46,499 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 16:57:46,500 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-22 16:57:46,500 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 16:57:46,500 DEBUG [Listener at localhost/36147] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-22 16:57:46,500 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:57:46,500 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:57:46,500 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): 
Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/nfs.dump.dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 16:57:46,501 INFO [Listener at localhost/36147] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 16:57:46,503 WARN [Listener at localhost/36147] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
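The entries above show HBaseTestingUtility pointing the Hadoop, YARN and DFS directory properties at the per-test data directory and then formatting a mini DFS (clusterid testClusterID) for the testLogRollOnPipelineRestart run. For orientation only, a minimal sketch of how a test typically brings an equivalent mini cluster up and down with this utility; the option values mirror the StartMiniClusterOption printed above, while the class name and the body of the try block are illustrative.

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.StartMiniClusterOption;

  public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
      HBaseTestingUtility util = new HBaseTestingUtility();
      // Mirrors the logged option: 1 master, 1 region server, 2 data nodes, 1 ZK server.
      StartMiniClusterOption option = StartMiniClusterOption.builder()
          .numMasters(1)
          .numRegionServers(1)
          .numDataNodes(2)
          .numZkServers(1)
          .build();
      util.startMiniCluster(option);   // brings up mini DFS, mini ZooKeeper and HBase
      try {
        // ... exercise WAL rolling against util.getConnection() here ...
      } finally {
        util.shutdownMiniCluster();    // tears the cluster down and removes the test data dir
      }
    }
  }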
2023-05-22 16:57:46,506 WARN [Listener at localhost/36147] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:57:46,507 WARN [Listener at localhost/36147] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:57:46,553 WARN [Listener at localhost/36147] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:46,555 INFO [Listener at localhost/36147] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:46,561 INFO [Listener at localhost/36147] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_38387_hdfs____q7k8ev/webapp 2023-05-22 16:57:46,651 INFO [Listener at localhost/36147] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38387 2023-05-22 16:57:46,652 WARN [Listener at localhost/36147] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 16:57:46,655 WARN [Listener at localhost/36147] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:57:46,656 WARN [Listener at localhost/36147] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:57:46,695 WARN [Listener at localhost/35761] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:46,704 WARN [Listener at localhost/35761] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:46,706 WARN [Listener at localhost/35761] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:46,707 INFO [Listener at localhost/35761] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:46,711 INFO [Listener at localhost/35761] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_43115_datanode____.jvpmub/webapp 2023-05-22 16:57:46,802 INFO [Listener at localhost/35761] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43115 2023-05-22 16:57:46,808 WARN [Listener at localhost/45351] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:46,824 WARN [Listener at localhost/45351] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:57:46,827 WARN [Listener at localhost/45351] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:57:46,829 INFO [Listener at localhost/45351] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:57:46,834 INFO [Listener at localhost/45351] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_40977_datanode____2z1t1g/webapp 2023-05-22 16:57:46,894 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:57:46,903 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdbd3e3543bfab918: Processing first storage report for DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:57:46,903 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdbd3e3543bfab918: from storage DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 node DatanodeRegistration(127.0.0.1:34909, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=35705, infoSecurePort=0, ipcPort=45351, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:46,903 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdbd3e3543bfab918: Processing first storage report for DS-75ea55fd-3950-4a3c-9c7e-b45418924534 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:57:46,903 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdbd3e3543bfab918: from storage DS-75ea55fd-3950-4a3c-9c7e-b45418924534 node DatanodeRegistration(127.0.0.1:34909, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=35705, infoSecurePort=0, ipcPort=45351, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:46,937 INFO [Listener at localhost/45351] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40977 2023-05-22 16:57:46,944 WARN [Listener at localhost/42493] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:57:47,038 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xce7baf1856014bcf: Processing first storage report for DS-6c375f34-cb98-4373-8d8a-593a8c80713b from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 16:57:47,038 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xce7baf1856014bcf: from storage DS-6c375f34-cb98-4373-8d8a-593a8c80713b node DatanodeRegistration(127.0.0.1:46777, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, infoPort=38627, infoSecurePort=0, ipcPort=42493, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:47,038 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xce7baf1856014bcf: Processing first storage report for DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 16:57:47,038 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xce7baf1856014bcf: from storage DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a node DatanodeRegistration(127.0.0.1:46777, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, 
infoPort=38627, infoSecurePort=0, ipcPort=42493, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:57:47,052 DEBUG [Listener at localhost/42493] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401 2023-05-22 16:57:47,054 INFO [Listener at localhost/42493] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/zookeeper_0, clientPort=62530, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 16:57:47,056 INFO [Listener at localhost/42493] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62530 2023-05-22 16:57:47,056 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,057 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,073 INFO [Listener at localhost/42493] util.FSUtils(471): Created version file at hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f with version=8 2023-05-22 16:57:47,073 INFO [Listener at localhost/42493] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:57:47,075 INFO [Listener at localhost/42493] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:57:47,077 INFO [Listener at localhost/42493] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44291 2023-05-22 16:57:47,077 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,078 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,079 INFO [Listener at localhost/42493] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44291 connecting to ZooKeeper ensemble=127.0.0.1:62530 2023-05-22 16:57:47,085 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:442910x0, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:57:47,086 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44291-0x10053d3c6300000 connected 2023-05-22 16:57:47,101 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:57:47,101 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:57:47,102 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:57:47,102 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44291 2023-05-22 16:57:47,102 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44291 2023-05-22 16:57:47,102 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44291 2023-05-22 16:57:47,103 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44291 2023-05-22 16:57:47,103 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44291 2023-05-22 16:57:47,103 INFO [Listener at localhost/42493] master.HMaster(444): hbase.rootdir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f, hbase.cluster.distributed=false 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:57:47,116 INFO [Listener at localhost/42493] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:57:47,117 INFO [Listener at localhost/42493] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39651 2023-05-22 16:57:47,118 INFO [Listener at localhost/42493] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:57:47,118 DEBUG [Listener at localhost/42493] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 16:57:47,119 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,120 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,121 INFO [Listener at localhost/42493] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39651 connecting to ZooKeeper ensemble=127.0.0.1:62530 2023-05-22 16:57:47,124 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:396510x0, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:57:47,125 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39651-0x10053d3c6300001 connected 2023-05-22 16:57:47,125 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:57:47,125 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:57:47,126 DEBUG [Listener at localhost/42493] zookeeper.ZKUtil(164): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:57:47,126 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39651 2023-05-22 16:57:47,126 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39651 2023-05-22 16:57:47,127 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39651 2023-05-22 16:57:47,127 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39651 2023-05-22 16:57:47,127 DEBUG [Listener at localhost/42493] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39651 2023-05-22 16:57:47,128 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,130 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:57:47,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,132 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:57:47,132 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:57:47,132 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:57:47,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44291,1684774667074 from backup master directory 2023-05-22 16:57:47,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:57:47,135 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,135 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:57:47,135 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
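At this point the master has registered under /hbase/backup-masters, taken the /hbase/master znode and removed its backup-master entry; the ZKWatcher/ZKUtil traffic above is internal plumbing. The outcome is visible through the public client API; a sketch, assuming a client configured against the mini ZooKeeper quorum printed earlier (clientPort=62530), of reading back the active and backup masters:

  import java.util.EnumSet;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.ClusterMetrics;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class ActiveMasterSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      conf.set("hbase.zookeeper.quorum", "127.0.0.1");
      conf.setInt("hbase.zookeeper.property.clientPort", 62530); // port logged by MiniZooKeeperCluster
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        ClusterMetrics metrics = admin.getClusterMetrics(
            EnumSet.of(ClusterMetrics.Option.MASTER, ClusterMetrics.Option.BACKUP_MASTERS));
        // Expected to report jenkins-hbase4.apache.org,44291,... as active and no backup masters.
        System.out.println("active master  = " + metrics.getMasterName());
        System.out.println("backup masters = " + metrics.getBackupMasterNames());
      }
    }
  }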
2023-05-22 16:57:47,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/hbase.id with ID: fc37f0ca-e3d4-4269-987f-e5dd5a951fcb 2023-05-22 16:57:47,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:47,161 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x671ca5df to 127.0.0.1:62530 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:57:47,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d423ca1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:57:47,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:57:47,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 16:57:47,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:57:47,174 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store-tmp 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:57:47,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:47,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:57:47,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:57:47,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44291%2C1684774667074, suffix=, logDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074, archiveDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/oldWALs, maxLogs=10 2023-05-22 16:57:47,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774667190 2023-05-22 16:57:47,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] 2023-05-22 16:57:47,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:57:47,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:47,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,202 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,204 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 16:57:47,205 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 16:57:47,205 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:57:47,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:57:47,212 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721910, jitterRate=-0.0820448100566864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:57:47,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:57:47,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 16:57:47,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 16:57:47,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
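The WAL configuration line above (blocksize=256 MB, rollsize=128 MB, maxLogs=10) is the mechanism TestLogRolling exercises: a log rolls once the current file reaches rollsize, which is blocksize multiplied by the roll multiplier. A sketch of the standard regionserver WAL keys behind those numbers, under the assumption that the usual hbase.regionserver.* names apply; the maxLogs=10 printed here belongs to the master's local-store WAL, which is sized separately.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class WalRollTuningSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // rollsize = blocksize * multiplier: 256 MB * 0.5 = 128 MB, matching the log line above.
      conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
      conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
      // Time-based rolling and a cap on accumulated WAL files (flushes are forced when exceeded).
      conf.setLong("hbase.regionserver.logroll.period", 3_600_000L);
      conf.setInt("hbase.regionserver.maxlogs", 32);
      long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
          * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
      System.out.println("roll size = " + rollSize + " bytes");
    }
  }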
2023-05-22 16:57:47,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-22 16:57:47,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-22 16:57:47,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-22 16:57:47,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 16:57:47,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 16:57:47,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 16:57:47,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 16:57:47,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
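The StochasticLoadBalancer line above prints the tuning it loaded (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000) together with its cost-function list. A sketch of the corresponding configuration keys, assuming the standard hbase.master.balancer.stochastic.* names; the values simply restate what the balancer logged.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class BalancerTuningSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
      conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
      conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
      conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
      // Print back the effective stochastic-balancer settings.
      for (java.util.Map.Entry<String, String> e : conf) {
        if (e.getKey().startsWith("hbase.master.balancer.stochastic")) {
          System.out.println(e.getKey() + " = " + e.getValue());
        }
      }
    }
  }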
2023-05-22 16:57:47,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 16:57:47,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 16:57:47,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 16:57:47,231 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 16:57:47,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 16:57:47,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 16:57:47,234 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:57:47,234 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:57:47,234 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44291,1684774667074, sessionid=0x10053d3c6300000, setting cluster-up flag (Was=false) 2023-05-22 16:57:47,239 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 16:57:47,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,248 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 
16:57:47,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 16:57:47,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:47,253 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.hbase-snapshot/.tmp 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:57:47,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774697259 2023-05-22 16:57:47,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 16:57:47,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 16:57:47,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 16:57:47,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 16:57:47,260 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 16:57:47,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 16:57:47,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,260 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:57:47,261 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 16:57:47,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 16:57:47,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 16:57:47,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 16:57:47,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774667262,5,FailOnTimeoutGroup] 2023-05-22 16:57:47,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774667262,5,FailOnTimeoutGroup] 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
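The cleaner block above starts the LogsCleaner and HFileCleaner chores (both on a 600000 ms period) and initializes their delegate cleaners. A loosely sketched view of how those chains map onto configuration, assuming the usual hbase.master.*cleaner* keys; several of the delegates in the log (ReplicationLogCleaner, SnapshotHFileCleaner) are normally appended by the master itself rather than through these keys, and the class names below are copied from the log entries above.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class CleanerChoreSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Both chores run on the master cleaner interval (600000 ms in the log).
      conf.setInt("hbase.master.cleaner.interval", 600_000);
      conf.set("hbase.master.logcleaner.plugins",
          "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
          + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner");
      conf.set("hbase.master.hfilecleaner.plugins",
          "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
          + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner");
      System.out.println(conf.get("hbase.master.logcleaner.plugins"));
      System.out.println(conf.get("hbase.master.hfilecleaner.plugins"));
    }
  }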
2023-05-22 16:57:47,262 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:57:47,272 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:57:47,272 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:57:47,273 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f 2023-05-22 16:57:47,280 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:47,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:57:47,282 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/info 2023-05-22 16:57:47,283 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:57:47,283 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,283 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:57:47,284 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:57:47,285 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:57:47,285 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,285 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:57:47,286 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/table 2023-05-22 16:57:47,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:57:47,287 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,288 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740 2023-05-22 16:57:47,289 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740 2023-05-22 16:57:47,291 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:57:47,293 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:57:47,295 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:57:47,295 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=723983, jitterRate=-0.07940804958343506}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:57:47,295 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:57:47,295 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:57:47,295 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:57:47,295 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:57:47,296 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:57:47,296 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:57:47,296 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:57:47,296 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:57:47,297 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:57:47,297 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 16:57:47,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 16:57:47,299 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 16:57:47,300 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-22 16:57:47,329 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(951): ClusterId : fc37f0ca-e3d4-4269-987f-e5dd5a951fcb 2023-05-22 16:57:47,330 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:57:47,334 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:57:47,334 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:57:47,336 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:57:47,337 DEBUG [RS:0;jenkins-hbase4:39651] zookeeper.ReadOnlyZKClient(139): Connect 0x7418a973 to 127.0.0.1:62530 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:57:47,341 DEBUG [RS:0;jenkins-hbase4:39651] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70272328, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:57:47,341 DEBUG [RS:0;jenkins-hbase4:39651] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3525a348, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:57:47,350 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39651 2023-05-22 16:57:47,350 INFO [RS:0;jenkins-hbase4:39651] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:57:47,350 INFO [RS:0;jenkins-hbase4:39651] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:57:47,350 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-22 16:57:47,350 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44291,1684774667074 with isa=jenkins-hbase4.apache.org/172.31.14.131:39651, startcode=1684774667115 2023-05-22 16:57:47,351 DEBUG [RS:0;jenkins-hbase4:39651] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:57:47,354 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48641, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:57:47,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,355 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f 2023-05-22 16:57:47,355 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35761 2023-05-22 16:57:47,355 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:57:47,357 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:57:47,357 DEBUG [RS:0;jenkins-hbase4:39651] zookeeper.ZKUtil(162): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,357 WARN [RS:0;jenkins-hbase4:39651] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
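The reportForDuty / ZooKeeper traffic above boils down to an ephemeral child znode under /hbase/rs plus a children watch on the master side (the NodeChildrenChanged event and the RegionServerTracker entry that follows). The sketch below is a minimal stand-alone analogy using the plain Apache ZooKeeper client, not HBase's ReadOnlyZKClient/ZKUtil code; the quorum address, session timeout and znode paths are taken from the log, everything else is assumed.

    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Analogy of the /hbase/rs registration seen in the log: the region server
    // publishes an ephemeral child znode, the master keeps a children watch on
    // the parent. Assumes /hbase/rs already exists (as it does in the test cluster).
    public class RsRegistrationSketch {
        public static void main(String[] args) throws Exception {
            // Quorum and session timeout taken from the ReadOnlyZKClient line above.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:62530", 90000, event ->
                    System.out.println("ZK event: " + event.getType() + " " + event.getPath()));

            String rsPath = "/hbase/rs/jenkins-hbase4.apache.org,39651,1684774667115";
            // Ephemeral: the node disappears when the region server's session dies,
            // which is how RegionServerTracker later notices a crashed server.
            zk.create(rsPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

            // Master side: a children watch on /hbase/rs fires NodeChildrenChanged,
            // matching the ZKWatcher event logged above.
            List<String> servers = zk.getChildren("/hbase/rs", true);
            System.out.println("Live region servers: " + servers);
        }
    }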
2023-05-22 16:57:47,357 INFO [RS:0;jenkins-hbase4:39651] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:57:47,357 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,358 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39651,1684774667115] 2023-05-22 16:57:47,361 DEBUG [RS:0;jenkins-hbase4:39651] zookeeper.ZKUtil(162): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,362 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:57:47,362 INFO [RS:0;jenkins-hbase4:39651] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:57:47,364 INFO [RS:0;jenkins-hbase4:39651] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:57:47,365 INFO [RS:0;jenkins-hbase4:39651] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:57:47,365 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,365 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:57:47,366 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
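The FSHLogProvider instantiated above is selected by configuration rather than code. A minimal sketch of the relevant settings follows, assuming the usual HBase 2.x key names (hbase.wal.provider, hbase.regionserver.maxlogs); the value 32 matches the maxLogs=32 reported in the WAL configuration a little further down.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch of the configuration that steers the WALProvider choice logged above.
    // Key names are the commonly documented 2.x ones; treat them as assumptions here.
    public class WalProviderConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "filesystem" maps to FSHLogProvider, the provider named in the log.
            conf.set("hbase.wal.provider", "filesystem");
            // Upper bound on un-archived WAL files before flushes are forced.
            conf.setInt("hbase.regionserver.maxlogs", 32);
            System.out.println(conf.get("hbase.wal.provider"));
        }
    }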
2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,367 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,368 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,368 DEBUG [RS:0;jenkins-hbase4:39651] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:57:47,368 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,369 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,369 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,386 INFO [RS:0;jenkins-hbase4:39651] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:57:47,386 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39651,1684774667115-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:57:47,401 INFO [RS:0;jenkins-hbase4:39651] regionserver.Replication(203): jenkins-hbase4.apache.org,39651,1684774667115 started 2023-05-22 16:57:47,401 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39651,1684774667115, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39651, sessionid=0x10053d3c6300001 2023-05-22 16:57:47,401 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:57:47,401 DEBUG [RS:0;jenkins-hbase4:39651] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,401 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39651,1684774667115' 2023-05-22 16:57:47,401 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39651,1684774667115' 2023-05-22 16:57:47,402 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:57:47,403 DEBUG [RS:0;jenkins-hbase4:39651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:57:47,403 DEBUG [RS:0;jenkins-hbase4:39651] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:57:47,403 INFO [RS:0;jenkins-hbase4:39651] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:57:47,403 INFO [RS:0;jenkins-hbase4:39651] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-22 16:57:47,451 DEBUG [jenkins-hbase4:44291] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 16:57:47,452 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39651,1684774667115, state=OPENING 2023-05-22 16:57:47,454 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 16:57:47,455 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:47,456 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39651,1684774667115}] 2023-05-22 16:57:47,456 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:57:47,505 INFO [RS:0;jenkins-hbase4:39651] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39651%2C1684774667115, suffix=, logDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115, archiveDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/oldWALs, maxLogs=32 2023-05-22 16:57:47,516 INFO [RS:0;jenkins-hbase4:39651] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 2023-05-22 16:57:47,516 DEBUG [RS:0;jenkins-hbase4:39651] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK], DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] 2023-05-22 16:57:47,610 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,610 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 16:57:47,613 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60148, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 16:57:47,617 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 16:57:47,617 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:57:47,618 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39651%2C1684774667115.meta, suffix=.meta, logDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115, archiveDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/oldWALs, maxLogs=32 2023-05-22 16:57:47,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.meta.1684774667619.meta 2023-05-22 16:57:47,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] 2023-05-22 16:57:47,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:57:47,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 16:57:47,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 16:57:47,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 16:57:47,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 16:57:47,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:47,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 16:57:47,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 16:57:47,631 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:57:47,632 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/info 2023-05-22 16:57:47,632 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/info 2023-05-22 16:57:47,633 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:57:47,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,634 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:57:47,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:57:47,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:57:47,635 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:57:47,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:57:47,636 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/table 2023-05-22 16:57:47,636 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740/table 2023-05-22 16:57:47,636 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:57:47,637 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:47,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740 2023-05-22 16:57:47,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/meta/1588230740 2023-05-22 16:57:47,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:57:47,642 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:57:47,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=733717, jitterRate=-0.06703175604343414}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:57:47,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:57:47,645 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774667610 2023-05-22 16:57:47,648 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 16:57:47,648 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 16:57:47,649 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39651,1684774667115, state=OPEN 2023-05-22 16:57:47,651 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 16:57:47,651 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:57:47,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 16:57:47,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39651,1684774667115 in 195 msec 2023-05-22 16:57:47,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 16:57:47,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 356 msec 2023-05-22 16:57:47,658 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 402 msec 2023-05-22 16:57:47,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774667658, completionTime=-1 2023-05-22 16:57:47,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 16:57:47,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 16:57:47,660 DEBUG [hconnection-0x12671c76-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:57:47,662 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:57:47,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 16:57:47,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774727664 2023-05-22 16:57:47,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774787664 2023-05-22 16:57:47,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-22 16:57:47,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44291,1684774667074-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44291,1684774667074-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44291,1684774667074-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44291, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 16:57:47,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-22 16:57:47,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:57:47,674 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 16:57:47,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 16:57:47,676 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:57:47,677 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:57:47,679 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:47,680 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb empty. 2023-05-22 16:57:47,680 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:47,680 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 16:57:47,691 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 16:57:47,692 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 41d2520d95542d6d9d31ddf86bb334cb, NAME => 'hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp 2023-05-22 16:57:47,699 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:47,699 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 41d2520d95542d6d9d31ddf86bb334cb, disabling compactions & flushes 2023-05-22 16:57:47,700 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 
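The shell-style descriptor printed for hbase:namespace above maps directly onto the 2.x builder API. The sketch below shows an equivalent construction for the 'info' family attributes that appear in the log; it is illustrative only, since here the descriptor is assembled internally by the master, not by client code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Builder-API equivalent of the shell-style descriptor logged for hbase:namespace.
    public class NamespaceTableDescriptorSketch {
        public static TableDescriptor build() {
            return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                            .setInMemory(true)                   // IN_MEMORY => 'true'
                            .setMaxVersions(10)                  // VERSIONS => '10'
                            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                            .build())
                    .build();
        }
    }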
2023-05-22 16:57:47,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:57:47,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. after waiting 0 ms 2023-05-22 16:57:47,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:57:47,700 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:57:47,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 41d2520d95542d6d9d31ddf86bb334cb: 2023-05-22 16:57:47,702 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:57:47,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774667703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774667703"}]},"ts":"1684774667703"} 2023-05-22 16:57:47,705 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:57:47,706 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:57:47,707 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774667707"}]},"ts":"1684774667707"} 2023-05-22 16:57:47,708 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 16:57:47,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=41d2520d95542d6d9d31ddf86bb334cb, ASSIGN}] 2023-05-22 16:57:47,719 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=41d2520d95542d6d9d31ddf86bb334cb, ASSIGN 2023-05-22 16:57:47,720 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=41d2520d95542d6d9d31ddf86bb334cb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39651,1684774667115; forceNewPlan=false, retain=false 2023-05-22 16:57:47,871 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=41d2520d95542d6d9d31ddf86bb334cb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:47,872 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774667871"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774667871"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774667871"}]},"ts":"1684774667871"} 2023-05-22 16:57:47,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 41d2520d95542d6d9d31ddf86bb334cb, server=jenkins-hbase4.apache.org,39651,1684774667115}] 2023-05-22 16:57:48,030 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:57:48,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 41d2520d95542d6d9d31ddf86bb334cb, NAME => 'hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:57:48,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:48,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,032 INFO [StoreOpener-41d2520d95542d6d9d31ddf86bb334cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,033 DEBUG [StoreOpener-41d2520d95542d6d9d31ddf86bb334cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb/info 2023-05-22 16:57:48,033 DEBUG [StoreOpener-41d2520d95542d6d9d31ddf86bb334cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb/info 2023-05-22 16:57:48,034 INFO [StoreOpener-41d2520d95542d6d9d31ddf86bb334cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 41d2520d95542d6d9d31ddf86bb334cb columnFamilyName info 2023-05-22 16:57:48,034 INFO [StoreOpener-41d2520d95542d6d9d31ddf86bb334cb-1] regionserver.HStore(310): Store=41d2520d95542d6d9d31ddf86bb334cb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:48,035 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,036 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:57:48,042 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/hbase/namespace/41d2520d95542d6d9d31ddf86bb334cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:57:48,043 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 41d2520d95542d6d9d31ddf86bb334cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751249, jitterRate=-0.044738173484802246}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:57:48,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 41d2520d95542d6d9d31ddf86bb334cb: 2023-05-22 16:57:48,044 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb., pid=6, masterSystemTime=1684774668026 2023-05-22 16:57:48,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:57:48,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 
2023-05-22 16:57:48,048 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=41d2520d95542d6d9d31ddf86bb334cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:48,048 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774668047"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774668047"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774668047"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774668047"}]},"ts":"1684774668047"} 2023-05-22 16:57:48,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 16:57:48,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 41d2520d95542d6d9d31ddf86bb334cb, server=jenkins-hbase4.apache.org,39651,1684774667115 in 175 msec 2023-05-22 16:57:48,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 16:57:48,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=41d2520d95542d6d9d31ddf86bb334cb, ASSIGN in 335 msec 2023-05-22 16:57:48,056 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:57:48,056 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774668056"}]},"ts":"1684774668056"} 2023-05-22 16:57:48,057 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 16:57:48,060 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:57:48,062 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-05-22 16:57:48,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 16:57:48,077 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:57:48,077 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:48,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 16:57:48,090 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): 
master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:57:48,098 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-05-22 16:57:48,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 16:57:48,112 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:57:48,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-22 16:57:48,128 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 16:57:48,130 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 16:57:48,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.995sec 2023-05-22 16:57:48,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 16:57:48,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 16:57:48,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 16:57:48,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44291,1684774667074-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 16:57:48,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44291,1684774667074-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
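The two CreateNamespaceProcedure entries above cover the built-in 'default' and 'hbase' namespaces, which the master creates on its own during initialization. For comparison, the client-side call for a user namespace would look roughly like the following sketch; the namespace name test_ns is hypothetical.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Client-side counterpart of CreateNamespaceProcedure, for a hypothetical user namespace.
    public class CreateNamespaceSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
                // 'default', 'hbase' and 'test_ns' should now be listed.
                System.out.println("namespaces: " + admin.listNamespaceDescriptors().length);
            }
        }
    }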
2023-05-22 16:57:48,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 16:57:48,230 DEBUG [Listener at localhost/42493] zookeeper.ReadOnlyZKClient(139): Connect 0x30d99e57 to 127.0.0.1:62530 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:57:48,234 DEBUG [Listener at localhost/42493] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31ca48bb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:57:48,235 DEBUG [hconnection-0x48da1e51-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:57:48,237 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60172, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:57:48,239 INFO [Listener at localhost/42493] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:57:48,239 INFO [Listener at localhost/42493] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:57:48,244 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 16:57:48,244 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:57:48,245 INFO [Listener at localhost/42493] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 16:57:48,245 INFO [Listener at localhost/42493] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-22 16:57:48,245 INFO [Listener at localhost/42493] wal.TestLogRolling(432): Replication=2 2023-05-22 16:57:48,246 DEBUG [Listener at localhost/42493] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-22 16:57:48,249 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-22 16:57:48,251 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-22 16:57:48,251 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
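The two TableDescriptorChecker warnings above are expected for this run: the test deliberately uses tiny region and memstore sizes so that flushes and log rolls happen quickly. The configuration keys are quoted verbatim in the warnings themselves; the sketch below sets them through the cluster Configuration, with the caveat that the test may instead set the equivalent attributes on the table descriptor.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // The two keys below are quoted verbatim in the warnings above; the small
    // values force frequent splits/flushes, which is what a log-rolling test wants.
    public class SmallRegionSizesSketch {
        public static Configuration tinySizes() {
            Configuration conf = HBaseConfiguration.create();
            conf.setLong("hbase.hregion.max.filesize", 786432L);        // ~768 KB max region file size
            conf.setLong("hbase.hregion.memstore.flush.size", 8192L);   // 8 KB memstore flush size
            return conf;
        }
    }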
2023-05-22 16:57:48,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:57:48,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-22 16:57:48,255 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:57:48,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-22 16:57:48,256 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:57:48,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:57:48,258 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,259 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9 empty. 
2023-05-22 16:57:48,259 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,259 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-22 16:57:48,271 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-22 16:57:48,273 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9117e5e102caa78c7adf0f27314fd8c9, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/.tmp 2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 9117e5e102caa78c7adf0f27314fd8c9, disabling compactions & flushes 2023-05-22 16:57:48,286 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. after waiting 0 ms 2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:57:48,286 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 
2023-05-22 16:57:48,286 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 9117e5e102caa78c7adf0f27314fd8c9: 2023-05-22 16:57:48,289 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:57:48,290 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684774668290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774668290"}]},"ts":"1684774668290"} 2023-05-22 16:57:48,292 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:57:48,293 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:57:48,293 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774668293"}]},"ts":"1684774668293"} 2023-05-22 16:57:48,295 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-22 16:57:48,298 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=9117e5e102caa78c7adf0f27314fd8c9, ASSIGN}] 2023-05-22 16:57:48,300 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=9117e5e102caa78c7adf0f27314fd8c9, ASSIGN 2023-05-22 16:57:48,301 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=9117e5e102caa78c7adf0f27314fd8c9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39651,1684774667115; forceNewPlan=false, retain=false 2023-05-22 16:57:48,452 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=9117e5e102caa78c7adf0f27314fd8c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:48,452 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684774668452"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774668452"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774668452"}]},"ts":"1684774668452"} 2023-05-22 16:57:48,455 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 9117e5e102caa78c7adf0f27314fd8c9, server=jenkins-hbase4.apache.org,39651,1684774667115}] 
2023-05-22 16:57:48,612 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:57:48,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9117e5e102caa78c7adf0f27314fd8c9, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:57:48,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:57:48,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,614 INFO [StoreOpener-9117e5e102caa78c7adf0f27314fd8c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,615 DEBUG [StoreOpener-9117e5e102caa78c7adf0f27314fd8c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9/info 2023-05-22 16:57:48,615 DEBUG [StoreOpener-9117e5e102caa78c7adf0f27314fd8c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9/info 2023-05-22 16:57:48,616 INFO [StoreOpener-9117e5e102caa78c7adf0f27314fd8c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9117e5e102caa78c7adf0f27314fd8c9 columnFamilyName info 2023-05-22 16:57:48,616 INFO [StoreOpener-9117e5e102caa78c7adf0f27314fd8c9-1] regionserver.HStore(310): Store=9117e5e102caa78c7adf0f27314fd8c9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:57:48,617 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:57:48,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/data/default/TestLogRolling-testLogRollOnPipelineRestart/9117e5e102caa78c7adf0f27314fd8c9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:57:48,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9117e5e102caa78c7adf0f27314fd8c9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=862411, jitterRate=0.09661275148391724}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:57:48,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9117e5e102caa78c7adf0f27314fd8c9: 2023-05-22 16:57:48,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9., pid=11, masterSystemTime=1684774668608 2023-05-22 16:57:48,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:57:48,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 
2023-05-22 16:57:48,677 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=9117e5e102caa78c7adf0f27314fd8c9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:57:48,677 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684774668677"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774668677"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774668677"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774668677"}]},"ts":"1684774668677"} 2023-05-22 16:57:48,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-22 16:57:48,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 9117e5e102caa78c7adf0f27314fd8c9, server=jenkins-hbase4.apache.org,39651,1684774667115 in 224 msec 2023-05-22 16:57:48,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-22 16:57:48,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=9117e5e102caa78c7adf0f27314fd8c9, ASSIGN in 383 msec 2023-05-22 16:57:48,684 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:57:48,685 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774668684"}]},"ts":"1684774668684"} 2023-05-22 16:57:48,686 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-22 16:57:48,688 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:57:48,690 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 437 msec 2023-05-22 16:57:51,016 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 16:57:53,362 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 16:57:53,363 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-22 16:57:58,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:57:58,258 INFO [Listener at localhost/42493] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 
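For orientation, a minimal hypothetical sketch (assuming the stock HBase 2.x client API; this is not the TestLogRolling source) of the kind of client-side createTable call that produces the 'info' family schema and CreateTableProcedure flow recorded above:

// Hypothetical illustration only: create a table equivalent to the one logged above
// ('TestLogRolling-testLogRollOnPipelineRestart', single 'info' family, VERSIONS=1,
// BLOCKSIZE=65536), using the standard HBase 2.x Admin API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
      admin.createTable(TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(1)      // VERSIONS => '1'
              .setBlocksize(65536)    // BLOCKSIZE => '65536'
              .build())
          .build());
      // createTable blocks until the master's CreateTableProcedure (pid=9 above) finishes,
      // which is what the client-side "Operation: CREATE ... procId: 9 completed" line reflects.
    }
  }
}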
2023-05-22 16:57:58,260 DEBUG [Listener at localhost/42493] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-05-22 16:57:58,260 DEBUG [Listener at localhost/42493] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:58:00,266 INFO [Listener at localhost/42493] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 2023-05-22 16:58:00,267 WARN [Listener at localhost/42493] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:00,268 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:00,269 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:58:00,270 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK], DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]) is bad. 2023-05-22 16:58:00,270 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.meta.1684774667619.meta block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]) is bad. 
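The "Found 1 regions for table ..." and "firstRegionName=..." lines above come from the testing utility; a comparable client-side check, sketched here as a hypothetical helper on the standard HBase 2.x RegionLocator API (not the utility's implementation), would be:

// Hypothetical sketch: enumerate the regions of the new table from a client,
// analogous to the "Found 1 regions for table ..." check above.
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListRegionsSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(tn)) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      System.out.println("Found " + locations.size() + " regions for table " + tn);
      // With a single, unsplit table this prints one region whose name matches the
      // "firstRegionName=TestLogRolling-...,,<timestamp>.<encoded>." line in the log.
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " on " + loc.getServerName());
      }
    }
  }
}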
2023-05-22 16:58:00,270 WARN [PacketResponder: BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:46777]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,269 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:58:00,271 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774667190 block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:46777,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]) is bad. 
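The datanode shutdown and restart churn above and below (block pool service ending, DataXceiver errors, then "Data Nodes restarted") is driven by the test harness breaking the WAL pipeline on purpose. A hypothetical sketch of that pattern, assuming Hadoop's MiniDFSCluster test API via HBaseTestingUtility (not the harness's actual code):

// Hypothetical helper: bounce every datanode in the mini DFS cluster so that
// in-flight WAL pipelines fail ("datanode ... is bad") and then recover once the
// nodes re-register (the "Processing first storage report" block reports above).
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class BounceDataNodesSketch {
  static void bounceDataNodes(HBaseTestingUtility util) throws Exception {
    MiniDFSCluster dfs = util.getDFSCluster();
    // Restart all datanodes in place, then wait until the cluster reports them active again.
    dfs.restartDataNodes();
    dfs.waitActive();
  }
}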
2023-05-22 16:58:00,271 WARN [PacketResponder: BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:46777]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,271 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:55612 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55612 dst: /127.0.0.1:34909 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,274 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1555909029_17 at /127.0.0.1:55572 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55572 dst: /127.0.0.1:34909 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,277 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:55604 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55604 dst: /127.0.0.1:34909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34909 remote=/127.0.0.1:55604]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,278 WARN [PacketResponder: BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34909]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,278 INFO [Listener at localhost/42493] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:00,280 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:44462 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:46777:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44462 dst: /127.0.0.1:46777 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,284 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:44470 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:46777:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44470 dst: /127.0.0.1:46777 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,284 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1555909029_17 at /127.0.0.1:44434 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:46777:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44434 dst: /127.0.0.1:46777 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,285 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:00,287 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 672a4885-5bed-4f34-bcc8-7168b7ce02b7) service to localhost/127.0.0.1:35761 2023-05-22 16:58:00,287 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data3/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:00,288 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data4/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:00,294 WARN [Listener at localhost/42493] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:00,297 WARN [Listener at localhost/42493] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:00,298 INFO [Listener at localhost/42493] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:00,302 INFO [Listener at localhost/42493] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_38187_datanode____.ancnxq/webapp 2023-05-22 16:58:00,392 INFO [Listener at localhost/42493] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38187 2023-05-22 16:58:00,399 WARN [Listener at localhost/36925] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:00,404 WARN [Listener at localhost/36925] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:00,405 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:00,405 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:00,405 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:00,414 INFO [Listener at localhost/36925] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:00,468 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x24acbd2fd52a5ee7: Processing first storage report for DS-6c375f34-cb98-4373-8d8a-593a8c80713b from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 16:58:00,469 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x24acbd2fd52a5ee7: from storage DS-6c375f34-cb98-4373-8d8a-593a8c80713b node DatanodeRegistration(127.0.0.1:37743, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, infoPort=35917, infoSecurePort=0, ipcPort=36925, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:00,469 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x24acbd2fd52a5ee7: Processing first storage report for DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 
16:58:00,469 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x24acbd2fd52a5ee7: from storage DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a node DatanodeRegistration(127.0.0.1:37743, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, infoPort=35917, infoSecurePort=0, ipcPort=36925, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:00,517 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:37970 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37970 dst: /127.0.0.1:34909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,518 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:00,517 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1555909029_17 at /127.0.0.1:37972 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37972 dst: /127.0.0.1:34909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,517 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:37986 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37986 dst: /127.0.0.1:34909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:00,518 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 118b38cc-8262-4d98-a666-1a9802e7909e) service to localhost/127.0.0.1:35761 2023-05-22 16:58:00,521 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:00,521 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data2/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:00,528 WARN [Listener at localhost/36925] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:00,530 WARN [Listener at localhost/36925] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:00,532 INFO [Listener at localhost/36925] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:00,536 INFO [Listener at localhost/36925] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_43611_datanode____.hvetuq/webapp 2023-05-22 16:58:00,628 INFO [Listener at localhost/36925] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43611 2023-05-22 16:58:00,634 WARN [Listener at localhost/33751] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:00,699 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe893806df757256f: Processing first storage report for DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:58:00,699 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe893806df757256f: from storage DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 node DatanodeRegistration(127.0.0.1:34735, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=37835, infoSecurePort=0, ipcPort=33751, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-22 16:58:00,700 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe893806df757256f: Processing first storage report for DS-75ea55fd-3950-4a3c-9c7e-b45418924534 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:58:00,700 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe893806df757256f: from storage DS-75ea55fd-3950-4a3c-9c7e-b45418924534 node DatanodeRegistration(127.0.0.1:34735, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=37835, infoSecurePort=0, ipcPort=33751, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:01,640 INFO [Listener at localhost/33751] wal.TestLogRolling(481): Data Nodes restarted 2023-05-22 16:58:01,642 INFO [Listener at localhost/33751] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-22 16:58:01,643 WARN [RS:0;jenkins-hbase4:39651.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:01,643 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39651%2C1684774667115:(num 1684774667506) roll requested 2023-05-22 16:58:01,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39651] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:01,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39651] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60172 deadline: 1684774691642, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-22 16:58:01,651 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 newFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 2023-05-22 16:58:01,651 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-22 16:58:01,651 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 2023-05-22 16:58:01,651 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK], DatanodeInfoWithStorage[127.0.0.1:34735,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] 2023-05-22 16:58:01,651 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 is not closed yet, will try archiving it next time 2023-05-22 16:58:01,651 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:01,652 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:13,751 INFO [Listener at localhost/33751] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-22 16:58:15,753 WARN [Listener at localhost/33751] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:15,755 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:34735,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-22 16:58:15,756 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK], DatanodeInfoWithStorage[127.0.0.1:34735,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:34735,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]) is bad. 
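The "Validated row row1002" / "Validated row row1003" lines come from the test's own validation helper. A hypothetical client-side equivalent of that write-then-read check, using the standard HBase 2.x Table API (the row key is taken from the log; the column qualifier and value are assumed for illustration):

// Hypothetical sketch of the write-then-read check behind the "Validated row rowNNNN"
// lines; not AbstractTestLogRolling's actual helper.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ValidateRowSketch {
  public static void main(String[] args) throws Exception {
    byte[] family = Bytes.toBytes("info");
    byte[] qualifier = Bytes.toBytes("");   // assumed qualifier, for illustration only
    byte[] row = Bytes.toBytes("row1003");
    byte[] value = Bytes.toBytes("value1003");  // assumed value, for illustration only
    TableName tn = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(tn)) {
      // The Put appends an edit to the WAL; with a broken pipeline (as above) the append
      // surfaces as a DamagedWALException, a log roll is requested, and the client retries.
      table.put(new Put(row).addColumn(family, qualifier, value));
      Result result = table.get(new Get(row));
      if (!Bytes.equals(result.getValue(family, qualifier), value)) {
        throw new AssertionError("Row row1003 did not validate");
      }
    }
  }
}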
2023-05-22 16:58:15,756 WARN [PacketResponder: BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34735]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:15,756 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:34170 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:37743:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34170 dst: /127.0.0.1:37743 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:15,759 INFO [Listener at localhost/33751] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:15,862 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:41590 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:34735:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41590 dst: /127.0.0.1:34735 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:15,864 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:15,864 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 118b38cc-8262-4d98-a666-1a9802e7909e) service to localhost/127.0.0.1:35761 2023-05-22 16:58:15,864 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:15,865 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data2/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:15,871 WARN [Listener at localhost/33751] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:15,873 WARN [Listener at localhost/33751] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:15,874 INFO [Listener at localhost/33751] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:15,879 INFO [Listener at localhost/33751] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_37803_datanode____.pi8b42/webapp 2023-05-22 16:58:15,970 INFO [Listener at localhost/33751] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37803 2023-05-22 16:58:15,977 WARN [Listener at localhost/39687] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:15,980 WARN [Listener at localhost/39687] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:15,980 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:15,985 INFO [Listener at localhost/39687] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:16,045 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8bf700f54920be88: Processing first storage report for DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:58:16,045 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8bf700f54920be88: from storage DS-2d49f7ba-7197-4f3b-939d-e193de5ba405 node DatanodeRegistration(127.0.0.1:34587, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=38515, infoSecurePort=0, ipcPort=39687, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-22 16:58:16,046 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8bf700f54920be88: Processing first storage report for DS-75ea55fd-3950-4a3c-9c7e-b45418924534 from datanode 118b38cc-8262-4d98-a666-1a9802e7909e 2023-05-22 16:58:16,046 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8bf700f54920be88: from storage DS-75ea55fd-3950-4a3c-9c7e-b45418924534 node DatanodeRegistration(127.0.0.1:34587, datanodeUuid=118b38cc-8262-4d98-a666-1a9802e7909e, infoPort=38515, infoSecurePort=0, ipcPort=39687, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:16,089 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-840092962_17 at /127.0.0.1:53594 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:37743:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53594 dst: /127.0.0.1:37743 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:16,091 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:16,091 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 672a4885-5bed-4f34-bcc8-7168b7ce02b7) service to localhost/127.0.0.1:35761 2023-05-22 16:58:16,091 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data3/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:16,092 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data4/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:16,099 WARN [Listener at localhost/39687] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:16,101 WARN [Listener at localhost/39687] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:16,102 INFO [Listener at localhost/39687] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:16,106 INFO [Listener at localhost/39687] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/java.io.tmpdir/Jetty_localhost_38507_datanode____dwmi1q/webapp 2023-05-22 16:58:16,195 INFO [Listener at localhost/39687] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38507 2023-05-22 16:58:16,201 WARN [Listener at localhost/34381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:16,265 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa4ac9f7bf6456a98: Processing first storage report for DS-6c375f34-cb98-4373-8d8a-593a8c80713b from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 16:58:16,265 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa4ac9f7bf6456a98: from storage DS-6c375f34-cb98-4373-8d8a-593a8c80713b node DatanodeRegistration(127.0.0.1:43269, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, infoPort=37991, infoSecurePort=0, ipcPort=34381, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:16,265 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa4ac9f7bf6456a98: Processing first storage report for DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a from datanode 672a4885-5bed-4f34-bcc8-7168b7ce02b7 2023-05-22 16:58:16,266 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa4ac9f7bf6456a98: from storage DS-a1f7cf11-bbe5-40b8-a689-a773eda3cc0a node DatanodeRegistration(127.0.0.1:43269, datanodeUuid=672a4885-5bed-4f34-bcc8-7168b7ce02b7, infoPort=37991, infoSecurePort=0, ipcPort=34381, storageInfo=lv=-57;cid=testClusterID;nsid=994457562;c=1684774666509), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:17,205 INFO [Listener at localhost/34381] wal.TestLogRolling(498): Data Nodes restarted 2023-05-22 16:58:17,207 INFO [Listener at localhost/34381] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-22 16:58:17,207 WARN [RS:0;jenkins-hbase4:39651.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,208 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39651%2C1684774667115:(num 1684774681643) roll requested 2023-05-22 16:58:17,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39651] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39651] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60172 deadline: 1684774707207, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-22 16:58:17,216 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 newFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 2023-05-22 16:58:17,216 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-22 16:58:17,216 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 2023-05-22 16:58:17,217 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,217 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34587,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:43269,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] 2023-05-22 16:58:17,217 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37743,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,217 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 is not closed yet, will try archiving it next time 2023-05-22 16:58:17,260 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,260 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44291%2C1684774667074:(num 1684774667190) roll requested 2023-05-22 16:58:17,260 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,261 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,268 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-22 16:58:17,268 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774667190 with entries=88, filesize=43.78 KB; new WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774697260 2023-05-22 16:58:17,269 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43269,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK], DatanodeInfoWithStorage[127.0.0.1:34587,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] 2023-05-22 16:58:17,269 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774667190 is not closed yet, will try archiving it next time 2023-05-22 16:58:17,269 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:17,269 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074/jenkins-hbase4.apache.org%2C44291%2C1684774667074.1684774667190; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:29,302 DEBUG [Listener at localhost/34381] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 newFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 2023-05-22 16:58:29,304 INFO [Listener at localhost/34381] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 2023-05-22 16:58:29,307 DEBUG [Listener at localhost/34381] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34587,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:43269,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]] 2023-05-22 16:58:29,307 DEBUG [Listener at localhost/34381] wal.AbstractFSWAL(716): hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 is not closed yet, will try archiving it next time 2023-05-22 16:58:29,308 DEBUG [Listener at localhost/34381] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 2023-05-22 16:58:29,309 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 2023-05-22 16:58:29,311 WARN [IPC Server handler 3 on default port 35761] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1014 2023-05-22 16:58:29,314 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 after 5ms 2023-05-22 16:58:30,289 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@679b768b] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1053250174-172.31.14.131-1684774666509:blk_1073741832_1014, datanode=DatanodeInfoWithStorage[127.0.0.1:43269,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data4/current/BP-1053250174-172.31.14.131-1684774666509/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:33,314 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 after 4005ms 2023-05-22 16:58:33,315 DEBUG [Listener at localhost/34381] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774667506 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684774668043/Put/vlen=175/seqid=0] 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #4: [default/info:d/1684774668085/Put/vlen=9/seqid=0] 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #5: [hbase/info:d/1684774668109/Put/vlen=7/seqid=0] 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #3: 
[\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684774668673/Put/vlen=231/seqid=0] 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #4: [row1002/info:/1684774678264/Put/vlen=1045/seqid=0] 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-22 16:58:33,324 DEBUG [Listener at localhost/34381] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 2023-05-22 16:58:33,324 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 2023-05-22 16:58:33,325 WARN [IPC Server handler 2 on default port 35761] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018 2023-05-22 16:58:33,325 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 after 0ms 2023-05-22 16:58:34,269 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@51d354ea] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1053250174-172.31.14.131-1684774666509:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:34587,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current/BP-1053250174-172.31.14.131-1684774666509/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current/BP-1053250174-172.31.14.131-1684774666509/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-22 16:58:37,326 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 after 4001ms 2023-05-22 16:58:37,326 DEBUG [Listener at localhost/34381] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774681643 2023-05-22 16:58:37,330 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #6: [row1003/info:/1684774691746/Put/vlen=1045/seqid=0] 2023-05-22 16:58:37,330 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #7: [row1004/info:/1684774693752/Put/vlen=1045/seqid=0] 2023-05-22 16:58:37,330 DEBUG [Listener at localhost/34381] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-22 16:58:37,330 DEBUG [Listener at localhost/34381] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 2023-05-22 16:58:37,330 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 2023-05-22 16:58:37,331 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 after 1ms 2023-05-22 16:58:37,331 DEBUG [Listener at localhost/34381] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774697208 2023-05-22 16:58:37,334 DEBUG [Listener at localhost/34381] wal.TestLogRolling(522): #9: [row1005/info:/1684774707290/Put/vlen=1045/seqid=0] 2023-05-22 16:58:37,334 DEBUG [Listener at localhost/34381] wal.TestLogRolling(512): recovering lease for hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 2023-05-22 16:58:37,334 INFO [Listener at localhost/34381] 
util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 2023-05-22 16:58:37,334 WARN [IPC Server handler 1 on default port 35761] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-22 16:58:37,335 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 after 1ms 2023-05-22 16:58:38,268 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1555909029_17 at /127.0.0.1:49020 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:34587:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49020 dst: /127.0.0.1:34587 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34587 remote=/127.0.0.1:49020]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:38,269 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1555909029_17 at /127.0.0.1:40614 [Receiving block BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:43269:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40614 dst: /127.0.0.1:43269 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:38,269 WARN [ResponseProcessor for block BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-22 16:58:38,270 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 block BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34587,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK], DatanodeInfoWithStorage[127.0.0.1:43269,DS-6c375f34-cb98-4373-8d8a-593a8c80713b,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:34587,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]) is bad. 
2023-05-22 16:58:38,275 WARN [DataStreamer for file /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 block BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,335 INFO [Listener at localhost/34381] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 after 4001ms 2023-05-22 16:58:41,336 DEBUG [Listener at localhost/34381] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 2023-05-22 16:58:41,339 DEBUG [Listener at localhost/34381] wal.ProtobufLogReader(420): EOF at position 83 2023-05-22 16:58:41,340 INFO [Listener at localhost/34381] regionserver.HRegion(2745): Flushing 9117e5e102caa78c7adf0f27314fd8c9 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-22 16:58:41,341 WARN [RS:0;jenkins-hbase4:39651.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,342 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39651%2C1684774667115:(num 1684774709293) roll requested 2023-05-22 16:58:41,342 DEBUG [Listener at localhost/34381] regionserver.HRegion(2446): Flush status journal for 9117e5e102caa78c7adf0f27314fd8c9: 2023-05-22 16:58:41,342 INFO [Listener at localhost/34381] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) 
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,344 INFO [Listener at localhost/34381] regionserver.HRegion(2745): Flushing 41d2520d95542d6d9d31ddf86bb334cb 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 16:58:41,344 DEBUG [Listener at localhost/34381] regionserver.HRegion(2446): Flush status journal for 41d2520d95542d6d9d31ddf86bb334cb: 2023-05-22 16:58:41,344 INFO [Listener at localhost/34381] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,346 INFO [Listener at localhost/34381] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-22 16:58:41,346 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-22 16:58:41,346 DEBUG [Listener at localhost/34381] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-05-22 16:58:41,347 INFO [Listener at localhost/34381] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-22 16:58:41,349 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-05-22 16:58:41,349 INFO [Listener at localhost/34381] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-05-22 16:58:41,349 DEBUG [Listener at localhost/34381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x30d99e57 to 127.0.0.1:62530
2023-05-22 16:58:41,349 DEBUG [Listener at localhost/34381] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-22 16:58:41,350 DEBUG [Listener at localhost/34381] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-05-22 16:58:41,350 DEBUG [Listener at localhost/34381] util.JVMClusterUtil(257): Found active master hash=487443407, stopped=false
2023-05-22 16:58:41,350 INFO [Listener at localhost/34381] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44291,1684774667074
2023-05-22 16:58:41,353 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 newFile=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774721342
2023-05-22 16:58:41,353 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-22 16:58:41,353 INFO [Listener at localhost/34381] procedure2.ProcedureExecutor(629): Stopping
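For context on the preLogRoll entry above from wal.TestLogRolling$7: a minimal sketch of how a test can watch WAL rolls by registering a listener, assuming the in-tree (IA.Private) WAL and WALActionsListener APIs of branch-2.4. Everything below other than those two interfaces (the class name, the method name, and the println bodies) is illustrative, not taken from the test.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.regionserver.HRegionServer;
    import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
    import org.apache.hadoop.hbase.wal.WAL;

    final class LogRollObserver {
      // Registers a listener on the WAL that serves the given region so a test can
      // record every roll, much like the preLogRoll entry emitted above.
      static void watchRolls(HRegionServer rs, RegionInfo region) throws IOException {
        WAL wal = rs.getWAL(region);
        wal.registerWALActionsListener(new WALActionsListener() {
          @Override
          public void preLogRoll(Path oldPath, Path newPath) {
            System.out.println("preLogRoll: oldFile=" + oldPath + " newFile=" + newPath);
          }
          @Override
          public void postLogRoll(Path oldPath, Path newPath) {
            System.out.println("postLogRoll: now writing to " + newPath);
          }
        });
      }
    }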
2023-05-22 16:58:41,353 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL
2023-05-22 16:58:41,353 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-22 16:58:41,353 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774721342
2023-05-22 16:58:41,353 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-22 16:58:41,353 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-22 16:58:41,353 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,353 DEBUG [Listener at localhost/34381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x671ca5df to 127.0.0.1:62530 2023-05-22 16:58:41,354 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL 
hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293 failed. Cause="Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-22 16:58:41,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:58:41,354 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at 
org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,354 DEBUG [Listener at localhost/34381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:58:41,355 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,355 INFO [Listener at localhost/34381] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,39651,1684774667115' ***** 2023-05-22 16:58:41,355 INFO [Listener at localhost/34381] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 16:58:41,355 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:58:41,355 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-22 16:58:41,356 INFO [RS:0;jenkins-hbase4:39651] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 16:58:41,359 INFO [RS:0;jenkins-hbase4:39651] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 16:58:41,359 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 16:58:41,360 INFO [RS:0;jenkins-hbase4:39651] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 16:58:41,360 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(3303): Received CLOSE for 9117e5e102caa78c7adf0f27314fd8c9 2023-05-22 16:58:41,360 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:58:41,360 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,360 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(3303): Received CLOSE for 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:58:41,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9117e5e102caa78c7adf0f27314fd8c9, disabling compactions & flushes 2023-05-22 16:58:41,361 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34909,DS-2d49f7ba-7197-4f3b-939d-e193de5ba405,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:58:41,361 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,39651,1684774667115: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown 
Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,361 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:58:41,362 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-22 16:58:41,362 DEBUG [RS:0;jenkins-hbase4:39651] 
zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7418a973 to 127.0.0.1:62530
2023-05-22 16:58:41,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,362 DEBUG [RS:0;jenkins-hbase4:39651] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-22 16:58:41,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. after waiting 0 ms
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-22 16:58:41,362 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9117e5e102caa78c7adf0f27314fd8c9:
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 41d2520d95542d6d9d31ddf86bb334cb, disabling compactions & flushes
2023-05-22 16:58:41,363 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1478): Online Regions={9117e5e102caa78c7adf0f27314fd8c9=TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9., 41d2520d95542d6d9d31ddf86bb334cb=hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb., 1588230740=hbase:meta,,1.1588230740}
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-22 16:58:41,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.
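The region close journal and "Abort already in progress" entries above follow the failed HRegion "Flushing ..." attempts earlier in this log. As a hedged, public-API analogue of those flushes, a client could ask the cluster to flush the affected tables before the server stops; the connection setup below is an assumption for illustration, while the table name is taken from the region name in the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushBeforeShutdown {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Flush the test table and hbase:meta so their memstores are persisted
          // before the region server is stopped.
          admin.flush(TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart"));
          admin.flush(TableName.META_TABLE_NAME);
        }
      }
    }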
2023-05-22 16:58:41,363 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-22 16:58:41,363 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(3303): Received CLOSE for 9117e5e102caa78c7adf0f27314fd8c9
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-22 16:58:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-22 16:58:41,364 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1504): Waiting on 1588230740, 41d2520d95542d6d9d31ddf86bb334cb, 9117e5e102caa78c7adf0f27314fd8c9
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. after waiting 0 ms
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 41d2520d95542d6d9d31ddf86bb334cb:
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb.
2023-05-22 16:58:41,364 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC
2023-05-22 16:58:41,364 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 3024 in region hbase:meta,,1.1588230740
2023-05-22 16:58:41,364 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9117e5e102caa78c7adf0f27314fd8c9, disabling compactions & flushes
2023-05-22 16:58:41,364 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server
2023-05-22 16:58:41,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
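Earlier in this log, util.RecoverLeaseFSUtils reported "Recovered lease, attempt=1 on file=... after 4001ms" before the WAL could be re-read. The sketch below approximates that step with plain HDFS client calls (DistributedFileSystem.recoverLease); it is illustrative only, and the one-second polling interval is an assumption rather than what RecoverLeaseFSUtils actually uses.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class WalLeaseRecovery {
      public static void recover(Configuration conf, Path wal) throws Exception {
        FileSystem fs = wal.getFileSystem(conf);
        if (!(fs instanceof DistributedFileSystem)) {
          return; // lease recovery only applies to HDFS
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // recoverLease() returns true once the file is closed; otherwise poll until
        // the NameNode finishes block recovery for the last WAL block.
        while (!dfs.recoverLease(wal)) {
          Thread.sleep(1000L);
        }
      }
    }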
2023-05-22 16:58:41,365 INFO [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1015545856, "init": 513802240, "max": 2051014656, "used": 465837768 }, "NonHeapMemoryUsage": { "committed": 138829824, "init": 2555904, "max": -1, "used": 136252736 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] }
2023-05-22 16:58:41,364 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-22 16:58:41,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. after waiting 0 ms
2023-05-22 16:58:41,365 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-22 16:58:41,365 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-22 16:58:41,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,365 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-22 16:58:41,365 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
2023-05-22 16:58:41,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9.
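The roll that the log roller fails to complete in the surrounding entries can also be requested from a client through the Admin API. A hedged sketch follows; the connection setup is assumed, and the ServerName string is copied from the server name that appears in this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWal {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,39651,1684774667115");
          // Asks the region server to close its current WAL and open a new one, the
          // same operation the WAL roller performs when a roll is requested.
          admin.rollWALWriter(rs);
        }
      }
    }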
2023-05-22 16:58:41,366 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44291] master.MasterRpcServices(609): jenkins-hbase4.apache.org,39651,1684774667115 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,39651,1684774667115: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/WALs/jenkins-hbase4.apache.org,39651,1684774667115/jenkins-hbase4.apache.org%2C39651%2C1684774667115.1684774709293, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1053250174-172.31.14.131-1684774666509:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-22 16:58:41,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9117e5e102caa78c7adf0f27314fd8c9: 2023-05-22 16:58:41,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1684774668250.9117e5e102caa78c7adf0f27314fd8c9. 2023-05-22 16:58:41,366 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39651%2C1684774667115.meta:.meta(num 1684774667619) roll requested 2023-05-22 16:58:41,366 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. 
Skipping rolling of writer 2023-05-22 16:58:41,370 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-22 16:58:41,370 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-22 16:58:41,372 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:58:41,564 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(3303): Received CLOSE for 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:58:41,564 DEBUG [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1504): Waiting on 41d2520d95542d6d9d31ddf86bb334cb 2023-05-22 16:58:41,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 41d2520d95542d6d9d31ddf86bb334cb, disabling compactions & flushes 2023-05-22 16:58:41,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. after waiting 0 ms 2023-05-22 16:58:41,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,565 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 41d2520d95542d6d9d31ddf86bb334cb: 2023-05-22 16:58:41,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774667673.41d2520d95542d6d9d31ddf86bb334cb. 2023-05-22 16:58:41,764 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39651,1684774667115; all regions closed. 2023-05-22 16:58:41,764 DEBUG [RS:0;jenkins-hbase4:39651] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:58:41,764 INFO [RS:0;jenkins-hbase4:39651] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:58:41,765 INFO [RS:0;jenkins-hbase4:39651] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-22 16:58:41,765 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
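
The abort reported above originates in the log roller: FSHLog could not close the old writer during a roll (FailedLogCloseException, caused by the "Unexpected BlockUCState" RemoteException from the NameNode), so the region server aborts and the subsequent meta-WAL roll is skipped because the WAL is already closed. A WAL roll of the same kind can also be requested from a client through the public Admin API; the following is a minimal sketch only, and the ServerName values are illustrative rather than taken from this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWal {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Hypothetical target region server; a failed close surfaces as
                // FailedLogCloseException, which is what triggered the abort above.
                ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org", 39651, 1684774667115L);
                admin.rollWALWriter(rs);
            }
        }
    }
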
2023-05-22 16:58:41,766 INFO [RS:0;jenkins-hbase4:39651] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39651 2023-05-22 16:58:41,769 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39651,1684774667115 2023-05-22 16:58:41,769 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:58:41,769 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:58:41,770 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39651,1684774667115] 2023-05-22 16:58:41,770 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39651,1684774667115; numProcessing=1 2023-05-22 16:58:41,771 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39651,1684774667115 already deleted, retry=false 2023-05-22 16:58:41,771 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39651,1684774667115 expired; onlineServers=0 2023-05-22 16:58:41,771 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44291,1684774667074' ***** 2023-05-22 16:58:41,771 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 16:58:41,771 DEBUG [M:0;jenkins-hbase4:44291] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19324cb6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:58:41,772 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:58:41,772 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44291,1684774667074; all regions closed. 2023-05-22 16:58:41,772 DEBUG [M:0;jenkins-hbase4:44291] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:58:41,772 DEBUG [M:0;jenkins-hbase4:44291] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 16:58:41,772 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
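
The ZKWatcher lines above are ordinary ZooKeeper watch notifications: the region server's ephemeral node under /hbase/rs disappears when its session ends, and the master's RegionServerTracker reacts to the NodeDeleted / NodeChildrenChanged events to start expiration processing. A minimal sketch with the plain ZooKeeper client watching the same kind of znode (the quorum string and path are reused from the log purely for illustration):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class WatchRsNode {
        public static void main(String[] args) throws Exception {
            Watcher watcher = (WatchedEvent event) ->
                    System.out.println("event type=" + event.getType() + " path=" + event.getPath());
            ZooKeeper zk = new ZooKeeper("127.0.0.1:62530", 30_000, watcher);
            // Registers a one-shot watch; a NodeDeleted event fires when the ephemeral node goes away.
            zk.exists("/hbase/rs/jenkins-hbase4.apache.org,39651,1684774667115", true);
            Thread.sleep(60_000); // keep the session alive long enough to observe events
            zk.close();
        }
    }
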
2023-05-22 16:58:41,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774667262] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774667262,5,FailOnTimeoutGroup] 2023-05-22 16:58:41,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774667262] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774667262,5,FailOnTimeoutGroup] 2023-05-22 16:58:41,772 DEBUG [M:0;jenkins-hbase4:44291] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-22 16:58:41,773 INFO [M:0;jenkins-hbase4:44291] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-22 16:58:41,773 INFO [M:0;jenkins-hbase4:44291] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-22 16:58:41,773 INFO [M:0;jenkins-hbase4:44291] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-22 16:58:41,773 DEBUG [M:0;jenkins-hbase4:44291] master.HMaster(1512): Stopping service threads 2023-05-22 16:58:41,773 INFO [M:0;jenkins-hbase4:44291] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-05-22 16:58:41,774 ERROR [M:0;jenkins-hbase4:44291] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-22 16:58:41,774 INFO [M:0;jenkins-hbase4:44291] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-22 16:58:41,774 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
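
The ProcedureExecutor ERROR above complains that the PEWorkerGroup ThreadGroup still contains running threads at shutdown. Enumerating the members of a ThreadGroup is plain JDK; a minimal sketch of that kind of inspection (the group inspected here is the current thread's group, only to mirror the idea in the log message):

    public class DumpThreadGroup {
        public static void main(String[] args) {
            ThreadGroup group = Thread.currentThread().getThreadGroup();
            Thread[] threads = new Thread[group.activeCount() + 8]; // head-room; activeCount() is an estimate
            int n = group.enumerate(threads, true);                 // true = include subgroups
            for (int i = 0; i < n; i++) {
                System.out.println(group.getName() + " -> " + threads[i].getName()
                        + " (" + threads[i].getState() + ")");
            }
        }
    }
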
2023-05-22 16:58:41,775 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-22 16:58:41,775 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:41,775 DEBUG [M:0;jenkins-hbase4:44291] zookeeper.ZKUtil(398): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-22 16:58:41,775 WARN [M:0;jenkins-hbase4:44291] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-22 16:58:41,775 INFO [M:0;jenkins-hbase4:44291] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-22 16:58:41,775 INFO [M:0;jenkins-hbase4:44291] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-22 16:58:41,775 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:58:41,776 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:58:41,776 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:41,776 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:41,776 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:58:41,776 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-22 16:58:41,776 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.15 KB heapSize=45.78 KB 2023-05-22 16:58:41,789 INFO [M:0;jenkins-hbase4:44291] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.15 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/40f2d32f98e24b97950037c944e6d784 2023-05-22 16:58:41,794 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/40f2d32f98e24b97950037c944e6d784 as hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/40f2d32f98e24b97950037c944e6d784 2023-05-22 16:58:41,799 INFO [M:0;jenkins-hbase4:44291] regionserver.HStore(1080): Added hdfs://localhost:35761/user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/40f2d32f98e24b97950037c944e6d784, entries=11, sequenceid=92, filesize=7.0 K 2023-05-22 16:58:41,800 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegion(2948): Finished flush of dataSize ~38.15 KB/39063, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=92, compaction requested=false 2023-05-22 16:58:41,801 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:41,801 DEBUG [M:0;jenkins-hbase4:44291] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:58:41,802 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bbc6f9c2-1cf6-5bd5-9fe5-f5238e51f77f/MasterData/WALs/jenkins-hbase4.apache.org,44291,1684774667074 2023-05-22 16:58:41,805 INFO [M:0;jenkins-hbase4:44291] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-22 16:58:41,805 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:58:41,806 INFO [M:0;jenkins-hbase4:44291] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44291 2023-05-22 16:58:41,808 DEBUG [M:0;jenkins-hbase4:44291] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44291,1684774667074 already deleted, retry=false 2023-05-22 16:58:41,870 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:58:41,870 INFO [RS:0;jenkins-hbase4:39651] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39651,1684774667115; zookeeper connection closed. 
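
The master-store flush above reports both rounded and exact figures: dataSize ~38.15 KB/39063 and heapSize ~45.77 KB/46864, i.e. 39063 / 1024 ≈ 38.15 and 46864 / 1024 ≈ 45.77. A minimal sketch of the same conversion in plain Java (this is not HBase's own formatting code, just the arithmetic behind the line above):

    public class SizeFormat {
        public static void main(String[] args) {
            long dataSize = 39_063L;  // bytes, from the flush summary above
            long heapSize = 46_864L;
            System.out.printf("dataSize ~%.2f KB/%d, heapSize ~%.2f KB/%d%n",
                    dataSize / 1024.0, dataSize, heapSize / 1024.0, heapSize);
        }
    }
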
2023-05-22 16:58:41,870 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): regionserver:39651-0x10053d3c6300001, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:58:41,871 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@54522f6e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@54522f6e 2023-05-22 16:58:41,873 INFO [Listener at localhost/34381] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-22 16:58:41,970 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:58:41,970 INFO [M:0;jenkins-hbase4:44291] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44291,1684774667074; zookeeper connection closed. 2023-05-22 16:58:41,970 DEBUG [Listener at localhost/42493-EventThread] zookeeper.ZKWatcher(600): master:44291-0x10053d3c6300000, quorum=127.0.0.1:62530, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:58:41,971 WARN [Listener at localhost/34381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:41,975 INFO [Listener at localhost/34381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:42,078 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:42,079 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 672a4885-5bed-4f34-bcc8-7168b7ce02b7) service to localhost/127.0.0.1:35761 2023-05-22 16:58:42,079 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data3/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:42,079 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data4/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:42,081 WARN [Listener at localhost/34381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:58:42,084 INFO [Listener at localhost/34381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:42,188 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:58:42,188 WARN [BP-1053250174-172.31.14.131-1684774666509 heartbeating to localhost/127.0.0.1:35761] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1053250174-172.31.14.131-1684774666509 (Datanode Uuid 118b38cc-8262-4d98-a666-1a9802e7909e) service to localhost/127.0.0.1:35761 2023-05-22 16:58:42,188 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data1/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:42,189 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/cluster_b48852c6-ec88-6722-faa9-8789e7cb8f82/dfs/data/data2/current/BP-1053250174-172.31.14.131-1684774666509] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:58:42,199 INFO [Listener at localhost/34381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:58:42,310 INFO [Listener at localhost/34381] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-22 16:58:42,323 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-22 16:58:42,332 INFO [Listener at localhost/34381] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=85 (was 74) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:35761 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:35761 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35761 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:35761 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (466462416) connection to localhost/127.0.0.1:35761 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34381 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) 
org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=463 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=105 (was 95) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=5131 (was 5516) 2023-05-22 16:58:42,341 INFO [Listener at localhost/34381] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=85, OpenFileDescriptor=463, MaxFileDescriptor=60000, SystemLoadAverage=105, ProcessCount=169, AvailableMemoryMB=5132 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/hadoop.log.dir so I do NOT create it in target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6e653666-476b-4f6f-6ea2-add1d3123401/hadoop.tmp.dir so I do NOT create it in target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2, deleteOnExit=true 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] 
hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 16:58:42,342 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/test.cache.data in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/hadoop.log.dir in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 16:58:42,343 DEBUG [Listener at localhost/34381] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:58:42,343 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/nfs.dump.dir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/java.io.tmpdir in system properties and HBase conf 2023-05-22 16:58:42,344 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:58:42,345 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 16:58:42,345 INFO [Listener at localhost/34381] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 16:58:42,347 WARN [Listener at localhost/34381] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 16:58:42,350 WARN [Listener at localhost/34381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:58:42,350 WARN [Listener at localhost/34381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:58:42,393 WARN [Listener at localhost/34381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:42,395 INFO [Listener at localhost/34381] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:42,400 INFO [Listener at localhost/34381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/java.io.tmpdir/Jetty_localhost_41411_hdfs____.yamf1b/webapp 2023-05-22 16:58:42,492 INFO [Listener at localhost/34381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41411 2023-05-22 16:58:42,493 WARN [Listener at localhost/34381] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
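
The lines above are the next test case bringing up a fresh minicluster with StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1}: ZooKeeper, HDFS (NameNode and two DataNodes behind Jetty) and then HBase. A minimal sketch of how a test typically builds that option and starts and stops the cluster through HBaseTestingUtility; this is a generic illustration of the testing-utility API under those assumptions, not a reproduction of TestLogRolling itself:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                    .numMasters(1)
                    .numRegionServers(1)
                    .numDataNodes(2)
                    .numZkServers(1)
                    .build();
            util.startMiniCluster(option);   // spins up ZK, HDFS and HBase as in the log above
            try {
                // test body would go here
            } finally {
                util.shutdownMiniCluster();  // produces the "Minicluster is down" line seen earlier
            }
        }
    }
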
2023-05-22 16:58:42,496 WARN [Listener at localhost/34381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:58:42,496 WARN [Listener at localhost/34381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:58:42,534 WARN [Listener at localhost/35245] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:42,543 WARN [Listener at localhost/35245] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:42,545 WARN [Listener at localhost/35245] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:42,546 INFO [Listener at localhost/35245] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:42,551 INFO [Listener at localhost/35245] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/java.io.tmpdir/Jetty_localhost_33063_datanode____.s9huz0/webapp 2023-05-22 16:58:42,640 INFO [Listener at localhost/35245] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33063 2023-05-22 16:58:42,646 WARN [Listener at localhost/42701] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:42,656 WARN [Listener at localhost/42701] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:58:42,658 WARN [Listener at localhost/42701] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:58:42,659 INFO [Listener at localhost/42701] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:58:42,666 INFO [Listener at localhost/42701] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/java.io.tmpdir/Jetty_localhost_44667_datanode____.q7cjhe/webapp 2023-05-22 16:58:42,747 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd0458034f65bdb0: Processing first storage report for DS-f3efe6d4-c203-4a50-bc69-93b792fcd533 from datanode cc29ae53-7830-4619-92e1-66b7b014800f 2023-05-22 16:58:42,747 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd0458034f65bdb0: from storage DS-f3efe6d4-c203-4a50-bc69-93b792fcd533 node DatanodeRegistration(127.0.0.1:36275, datanodeUuid=cc29ae53-7830-4619-92e1-66b7b014800f, infoPort=32935, infoSecurePort=0, ipcPort=42701, storageInfo=lv=-57;cid=testClusterID;nsid=1732748609;c=1684774722353), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:42,747 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd0458034f65bdb0: Processing first storage report for DS-df797e8d-47ff-493d-b459-a61ea506326b from datanode cc29ae53-7830-4619-92e1-66b7b014800f 2023-05-22 16:58:42,747 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xbd0458034f65bdb0: from storage DS-df797e8d-47ff-493d-b459-a61ea506326b node DatanodeRegistration(127.0.0.1:36275, datanodeUuid=cc29ae53-7830-4619-92e1-66b7b014800f, infoPort=32935, infoSecurePort=0, ipcPort=42701, storageInfo=lv=-57;cid=testClusterID;nsid=1732748609;c=1684774722353), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:42,772 INFO [Listener at localhost/42701] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44667 2023-05-22 16:58:42,786 WARN [Listener at localhost/43469] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:58:42,872 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a689e721a8d023c: Processing first storage report for DS-19a80b28-8c32-4282-981d-30a9bb82b983 from datanode 1396087b-4d56-4bd2-b72a-a5ac82a3cb78 2023-05-22 16:58:42,872 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a689e721a8d023c: from storage DS-19a80b28-8c32-4282-981d-30a9bb82b983 node DatanodeRegistration(127.0.0.1:40233, datanodeUuid=1396087b-4d56-4bd2-b72a-a5ac82a3cb78, infoPort=35765, infoSecurePort=0, ipcPort=43469, storageInfo=lv=-57;cid=testClusterID;nsid=1732748609;c=1684774722353), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:58:42,872 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a689e721a8d023c: Processing first storage report for DS-911e4414-132a-44bf-94ed-58a7ba20175a from datanode 1396087b-4d56-4bd2-b72a-a5ac82a3cb78 2023-05-22 16:58:42,872 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a689e721a8d023c: from storage DS-911e4414-132a-44bf-94ed-58a7ba20175a node DatanodeRegistration(127.0.0.1:40233, datanodeUuid=1396087b-4d56-4bd2-b72a-a5ac82a3cb78, infoPort=35765, infoSecurePort=0, ipcPort=43469, storageInfo=lv=-57;cid=testClusterID;nsid=1732748609;c=1684774722353), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-22 16:58:42,892 DEBUG [Listener at localhost/43469] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c 2023-05-22 16:58:42,895 INFO [Listener at localhost/43469] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/zookeeper_0, clientPort=54040, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 16:58:42,896 INFO [Listener at localhost/43469] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54040 2023-05-22 16:58:42,896 INFO [Listener at localhost/43469] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,897 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,908 INFO [Listener at localhost/43469] util.FSUtils(471): Created version file at hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110 with version=8 2023-05-22 16:58:42,908 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:58:42,910 INFO [Listener at localhost/43469] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:58:42,912 INFO [Listener at localhost/43469] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35369 2023-05-22 16:58:42,912 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,913 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,914 INFO [Listener at localhost/43469] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35369 connecting to ZooKeeper ensemble=127.0.0.1:54040 2023-05-22 16:58:42,921 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:353690x0, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:58:42,921 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35369-0x10053d4a0510000 connected 2023-05-22 16:58:42,935 DEBUG [Listener at localhost/43469] 
zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:58:42,935 DEBUG [Listener at localhost/43469] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:58:42,936 DEBUG [Listener at localhost/43469] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:58:42,936 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35369 2023-05-22 16:58:42,936 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35369 2023-05-22 16:58:42,936 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35369 2023-05-22 16:58:42,937 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35369 2023-05-22 16:58:42,937 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35369 2023-05-22 16:58:42,937 INFO [Listener at localhost/43469] master.HMaster(444): hbase.rootdir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110, hbase.cluster.distributed=false 2023-05-22 16:58:42,949 INFO [Listener at localhost/43469] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:58:42,949 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,950 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,950 INFO [Listener at localhost/43469] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:58:42,950 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:58:42,950 INFO [Listener at localhost/43469] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:58:42,950 INFO [Listener at localhost/43469] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:58:42,951 INFO [Listener at localhost/43469] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42727 2023-05-22 16:58:42,951 INFO [Listener at localhost/43469] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:58:42,952 DEBUG [Listener at localhost/43469] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 
16:58:42,953 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,953 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,954 INFO [Listener at localhost/43469] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42727 connecting to ZooKeeper ensemble=127.0.0.1:54040 2023-05-22 16:58:42,957 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:427270x0, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:58:42,957 DEBUG [Listener at localhost/43469] zookeeper.ZKUtil(164): regionserver:427270x0, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:58:42,958 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42727-0x10053d4a0510001 connected 2023-05-22 16:58:42,958 DEBUG [Listener at localhost/43469] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:58:42,958 DEBUG [Listener at localhost/43469] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:58:42,959 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42727 2023-05-22 16:58:42,959 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42727 2023-05-22 16:58:42,959 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42727 2023-05-22 16:58:42,960 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42727 2023-05-22 16:58:42,960 DEBUG [Listener at localhost/43469] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42727 2023-05-22 16:58:42,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:42,962 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:58:42,962 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:42,963 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:58:42,963 DEBUG [Listener at 
localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:58:42,963 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:42,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:58:42,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35369,1684774722909 from backup master directory 2023-05-22 16:58:42,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:58:42,967 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:42,967 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-22 16:58:42,967 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:58:42,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:42,979 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/hbase.id with ID: fb50efe3-f850-4826-a942-faa89d327ba5 2023-05-22 16:58:42,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:42,992 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,001 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x26802d2f to 127.0.0.1:54040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:58:43,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d73292a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:58:43,005 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for 
table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:58:43,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 16:58:43,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:58:43,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store-tmp 2023-05-22 16:58:43,013 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:58:43,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:58:43,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:58:43,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/WALs/jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:43,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35369%2C1684774722909, suffix=, logDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/WALs/jenkins-hbase4.apache.org,35369,1684774722909, archiveDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/oldWALs, maxLogs=10 2023-05-22 16:58:43,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/WALs/jenkins-hbase4.apache.org,35369,1684774722909/jenkins-hbase4.apache.org%2C35369%2C1684774722909.1684774723017 2023-05-22 16:58:43,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK], DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK]] 2023-05-22 16:58:43,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:58:43,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,028 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,029 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 16:58:43,030 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 16:58:43,030 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:58:43,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:58:43,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727414, jitterRate=-0.07504649460315704}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:58:43,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:58:43,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 16:58:43,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 16:58:43,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-22 16:58:43,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-22 16:58:43,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-22 16:58:43,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-22 16:58:43,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 16:58:43,040 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 16:58:43,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 16:58:43,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 16:58:43,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-22 16:58:43,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 16:58:43,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 16:58:43,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 16:58:43,055 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 16:58:43,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 16:58:43,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 16:58:43,059 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:58:43,059 DEBUG [Listener at localhost/43469-EventThread] 
zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:58:43,059 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35369,1684774722909, sessionid=0x10053d4a0510000, setting cluster-up flag (Was=false) 2023-05-22 16:58:43,062 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 16:58:43,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:43,071 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 16:58:43,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:43,078 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.hbase-snapshot/.tmp 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:58:43,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774753083 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 16:58:43,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-22 16:58:43,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 16:58:43,084 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:58:43,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 16:58:43,084 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 16:58:43,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 16:58:43,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 16:58:43,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 16:58:43,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774723084,5,FailOnTimeoutGroup] 2023-05-22 16:58:43,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774723085,5,FailOnTimeoutGroup] 2023-05-22 16:58:43,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 16:58:43,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-22 16:58:43,085 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:58:43,098 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:58:43,098 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:58:43,098 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110 2023-05-22 16:58:43,105 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,107 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:58:43,108 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/info 2023-05-22 16:58:43,108 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:58:43,109 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,109 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:58:43,110 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:58:43,110 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:58:43,111 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,111 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:58:43,112 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/table 2023-05-22 16:58:43,112 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:58:43,113 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,113 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740 2023-05-22 16:58:43,113 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740 2023-05-22 16:58:43,115 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:58:43,116 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:58:43,118 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:58:43,118 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=779118, jitterRate=-0.009300589561462402}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:58:43,118 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:58:43,119 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:58:43,119 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:58:43,119 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:58:43,119 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:58:43,119 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:58:43,119 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:58:43,119 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:58:43,120 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:58:43,120 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 16:58:43,120 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 16:58:43,122 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 16:58:43,123 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-22 16:58:43,162 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(951): ClusterId : fb50efe3-f850-4826-a942-faa89d327ba5 2023-05-22 16:58:43,162 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:58:43,164 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:58:43,165 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:58:43,168 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:58:43,169 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ReadOnlyZKClient(139): Connect 0x3316c822 to 127.0.0.1:54040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:58:43,173 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dd12030, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:58:43,173 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39eea352, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:58:43,182 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42727 2023-05-22 16:58:43,182 INFO [RS:0;jenkins-hbase4:42727] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:58:43,182 INFO [RS:0;jenkins-hbase4:42727] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:58:43,182 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-22 16:58:43,183 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35369,1684774722909 with isa=jenkins-hbase4.apache.org/172.31.14.131:42727, startcode=1684774722949 2023-05-22 16:58:43,183 DEBUG [RS:0;jenkins-hbase4:42727] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:58:43,186 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45463, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:58:43,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,187 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110 2023-05-22 16:58:43,187 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35245 2023-05-22 16:58:43,187 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:58:43,189 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:58:43,189 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,189 WARN [RS:0;jenkins-hbase4:42727] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 16:58:43,189 INFO [RS:0;jenkins-hbase4:42727] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:58:43,189 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1946): logDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,190 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42727,1684774722949] 2023-05-22 16:58:43,194 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ZKUtil(162): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,194 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:58:43,195 INFO [RS:0;jenkins-hbase4:42727] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:58:43,196 INFO [RS:0;jenkins-hbase4:42727] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:58:43,196 INFO [RS:0;jenkins-hbase4:42727] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:58:43,196 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,196 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:58:43,198 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,198 DEBUG [RS:0;jenkins-hbase4:42727] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:58:43,199 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,199 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,199 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,210 INFO [RS:0;jenkins-hbase4:42727] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:58:43,210 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42727,1684774722949-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:58:43,220 INFO [RS:0;jenkins-hbase4:42727] regionserver.Replication(203): jenkins-hbase4.apache.org,42727,1684774722949 started 2023-05-22 16:58:43,220 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42727,1684774722949, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42727, sessionid=0x10053d4a0510001 2023-05-22 16:58:43,220 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:58:43,220 DEBUG [RS:0;jenkins-hbase4:42727] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,220 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42727,1684774722949' 2023-05-22 16:58:43,220 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42727,1684774722949' 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:58:43,221 DEBUG [RS:0;jenkins-hbase4:42727] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:58:43,222 DEBUG [RS:0;jenkins-hbase4:42727] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:58:43,222 INFO [RS:0;jenkins-hbase4:42727] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:58:43,222 INFO [RS:0;jenkins-hbase4:42727] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-22 16:58:43,273 DEBUG [jenkins-hbase4:35369] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 16:58:43,274 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42727,1684774722949, state=OPENING 2023-05-22 16:58:43,276 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 16:58:43,278 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,278 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42727,1684774722949}] 2023-05-22 16:58:43,278 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:58:43,324 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42727%2C1684774722949, suffix=, logDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949, archiveDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs, maxLogs=32 2023-05-22 16:58:43,332 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774723325 2023-05-22 16:58:43,332 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK], DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK]] 2023-05-22 16:58:43,432 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,432 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 16:58:43,435 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 16:58:43,438 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 16:58:43,438 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:58:43,440 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42727%2C1684774722949.meta, suffix=.meta, logDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949, archiveDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs, maxLogs=32 2023-05-22 16:58:43,447 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.meta.1684774723440.meta 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK], DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK]] 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 16:58:43,447 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 16:58:43,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 16:58:43,448 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:58:43,449 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/info 2023-05-22 16:58:43,449 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/info 2023-05-22 16:58:43,450 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:58:43,450 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,450 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:58:43,451 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:58:43,451 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:58:43,452 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:58:43,452 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,452 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:58:43,453 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/table 2023-05-22 16:58:43,453 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/table 2023-05-22 16:58:43,453 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:58:43,454 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,454 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740 2023-05-22 16:58:43,455 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740 2023-05-22 16:58:43,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:58:43,458 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:58:43,459 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=789155, jitterRate=0.0034636259078979492}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:58:43,459 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:58:43,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774723432 2023-05-22 16:58:43,465 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 16:58:43,466 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 16:58:43,466 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42727,1684774722949, state=OPEN 2023-05-22 16:58:43,468 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 16:58:43,468 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:58:43,470 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 16:58:43,470 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42727,1684774722949 in 190 msec 2023-05-22 16:58:43,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 16:58:43,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-05-22 16:58:43,474 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 395 msec 2023-05-22 16:58:43,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774723475, completionTime=-1 2023-05-22 16:58:43,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 16:58:43,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 16:58:43,478 DEBUG [hconnection-0x6603554e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:58:43,481 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40428, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:58:43,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 16:58:43,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774783482 2023-05-22 16:58:43,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774843482 2023-05-22 16:58:43,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-22 16:58:43,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1684774722909-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1684774722909-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1684774722909-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35369, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 16:58:43,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-22 16:58:43,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:58:43,489 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 16:58:43,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 16:58:43,491 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:58:43,492 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:58:43,495 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,495 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d empty. 2023-05-22 16:58:43,496 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,496 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 16:58:43,511 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 16:58:43,513 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c086c4dc57389e863217ea4c8a53092d, NAME => 'hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp 2023-05-22 16:58:43,520 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,520 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c086c4dc57389e863217ea4c8a53092d, disabling compactions & flushes 2023-05-22 16:58:43,521 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 
2023-05-22 16:58:43,521 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:43,521 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. after waiting 0 ms 2023-05-22 16:58:43,521 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:43,521 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:43,521 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c086c4dc57389e863217ea4c8a53092d: 2023-05-22 16:58:43,523 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:58:43,524 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774723524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774723524"}]},"ts":"1684774723524"} 2023-05-22 16:58:43,527 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:58:43,528 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:58:43,528 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774723528"}]},"ts":"1684774723528"} 2023-05-22 16:58:43,530 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 16:58:43,537 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c086c4dc57389e863217ea4c8a53092d, ASSIGN}] 2023-05-22 16:58:43,539 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c086c4dc57389e863217ea4c8a53092d, ASSIGN 2023-05-22 16:58:43,540 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c086c4dc57389e863217ea4c8a53092d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1684774722949; forceNewPlan=false, retain=false 2023-05-22 16:58:43,691 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c086c4dc57389e863217ea4c8a53092d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,691 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774723691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774723691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774723691"}]},"ts":"1684774723691"} 2023-05-22 16:58:43,694 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c086c4dc57389e863217ea4c8a53092d, server=jenkins-hbase4.apache.org,42727,1684774722949}] 2023-05-22 16:58:43,850 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:43,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c086c4dc57389e863217ea4c8a53092d, NAME => 'hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:58:43,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:43,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,851 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,852 INFO [StoreOpener-c086c4dc57389e863217ea4c8a53092d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,853 DEBUG [StoreOpener-c086c4dc57389e863217ea4c8a53092d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/info 2023-05-22 16:58:43,853 DEBUG [StoreOpener-c086c4dc57389e863217ea4c8a53092d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/info 2023-05-22 16:58:43,853 INFO [StoreOpener-c086c4dc57389e863217ea4c8a53092d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c086c4dc57389e863217ea4c8a53092d columnFamilyName info 2023-05-22 16:58:43,854 INFO [StoreOpener-c086c4dc57389e863217ea4c8a53092d-1] regionserver.HStore(310): Store=c086c4dc57389e863217ea4c8a53092d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:43,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:58:43,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:58:43,861 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c086c4dc57389e863217ea4c8a53092d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=753462, jitterRate=-0.041924431920051575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:58:43,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c086c4dc57389e863217ea4c8a53092d: 2023-05-22 16:58:43,862 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d., pid=6, masterSystemTime=1684774723846 2023-05-22 16:58:43,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:43,865 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 
2023-05-22 16:58:43,865 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c086c4dc57389e863217ea4c8a53092d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:43,865 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774723865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774723865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774723865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774723865"}]},"ts":"1684774723865"} 2023-05-22 16:58:43,870 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 16:58:43,870 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c086c4dc57389e863217ea4c8a53092d, server=jenkins-hbase4.apache.org,42727,1684774722949 in 173 msec 2023-05-22 16:58:43,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 16:58:43,874 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c086c4dc57389e863217ea4c8a53092d, ASSIGN in 333 msec 2023-05-22 16:58:43,874 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:58:43,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774723874"}]},"ts":"1684774723874"} 2023-05-22 16:58:43,876 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 16:58:43,878 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:58:43,880 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 389 msec 2023-05-22 16:58:43,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 16:58:43,893 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:58:43,893 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 16:58:43,905 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): 
master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:58:43,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-22 16:58:43,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 16:58:43,929 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:58:43,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-05-22 16:58:43,945 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 16:58:43,952 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 16:58:43,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.985sec 2023-05-22 16:58:43,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 16:58:43,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 16:58:43,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 16:58:43,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1684774722909-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 16:58:43,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1684774722909-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
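
Note: the two CreateNamespaceProcedure runs above (pid=7 for 'default', pid=8 for 'hbase') are issued internally by the master while it bootstraps the system namespaces; no client code is involved. For comparison, a client-driven namespace creation goes through the Admin API. A minimal sketch, with an illustrative namespace name that does not appear in this log:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Runs a CreateNamespaceProcedure on the master, analogous to the ones logged above,
      // but for a user namespace rather than the built-in 'default'/'hbase' namespaces.
      admin.createNamespace(NamespaceDescriptor.create("testns").build());
    }
  }
}
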
2023-05-22 16:58:43,962 DEBUG [Listener at localhost/43469] zookeeper.ReadOnlyZKClient(139): Connect 0x67d8f951 to 127.0.0.1:54040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:58:43,974 DEBUG [Listener at localhost/43469] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c9963b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:58:43,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 16:58:43,982 DEBUG [hconnection-0x39af6214-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:58:43,985 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40444, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:58:43,987 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:58:43,987 INFO [Listener at localhost/43469] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:58:43,991 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 16:58:43,991 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:58:43,992 INFO [Listener at localhost/43469] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 16:58:43,994 DEBUG [Listener at localhost/43469] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-22 16:58:44,000 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52386, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-22 16:58:44,002 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-22 16:58:44,002 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-22 16:58:44,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:58:44,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:58:44,007 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:58:44,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-22 16:58:44,008 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:58:44,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:58:44,010 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,010 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92 empty. 
2023-05-22 16:58:44,011 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,011 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-22 16:58:44,035 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-22 16:58:44,036 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 91be25139f809fef2730e0b2b355ff92, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/.tmp 2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 91be25139f809fef2730e0b2b355ff92, disabling compactions & flushes 2023-05-22 16:58:44,053 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. after waiting 0 ms 2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:44,053 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 
2023-05-22 16:58:44,053 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:58:44,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:58:44,057 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684774724057"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774724057"}]},"ts":"1684774724057"} 2023-05-22 16:58:44,060 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:58:44,061 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:58:44,061 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774724061"}]},"ts":"1684774724061"} 2023-05-22 16:58:44,063 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-22 16:58:44,069 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=91be25139f809fef2730e0b2b355ff92, ASSIGN}] 2023-05-22 16:58:44,071 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=91be25139f809fef2730e0b2b355ff92, ASSIGN 2023-05-22 16:58:44,075 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=91be25139f809fef2730e0b2b355ff92, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42727,1684774722949; forceNewPlan=false, retain=false 2023-05-22 16:58:44,226 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=91be25139f809fef2730e0b2b355ff92, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:44,226 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684774724226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774724226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774724226"}]},"ts":"1684774724226"} 2023-05-22 16:58:44,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 91be25139f809fef2730e0b2b355ff92, server=jenkins-hbase4.apache.org,42727,1684774722949}] 2023-05-22 16:58:44,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:44,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 91be25139f809fef2730e0b2b355ff92, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:58:44,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:58:44,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,388 INFO [StoreOpener-91be25139f809fef2730e0b2b355ff92-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,390 DEBUG [StoreOpener-91be25139f809fef2730e0b2b355ff92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info 2023-05-22 16:58:44,390 DEBUG [StoreOpener-91be25139f809fef2730e0b2b355ff92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info 2023-05-22 16:58:44,390 INFO [StoreOpener-91be25139f809fef2730e0b2b355ff92-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 91be25139f809fef2730e0b2b355ff92 columnFamilyName info 2023-05-22 16:58:44,391 INFO [StoreOpener-91be25139f809fef2730e0b2b355ff92-1] regionserver.HStore(310): Store=91be25139f809fef2730e0b2b355ff92/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:58:44,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:58:44,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:58:44,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 91be25139f809fef2730e0b2b355ff92; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=725004, jitterRate=-0.07811066508293152}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:58:44,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:58:44,399 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92., pid=11, masterSystemTime=1684774724381 2023-05-22 16:58:44,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:44,401 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 
2023-05-22 16:58:44,401 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=91be25139f809fef2730e0b2b355ff92, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:44,402 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684774724401"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774724401"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774724401"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774724401"}]},"ts":"1684774724401"} 2023-05-22 16:58:44,406 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-22 16:58:44,406 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 91be25139f809fef2730e0b2b355ff92, server=jenkins-hbase4.apache.org,42727,1684774722949 in 176 msec 2023-05-22 16:58:44,409 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-22 16:58:44,409 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=91be25139f809fef2730e0b2b355ff92, ASSIGN in 337 msec 2023-05-22 16:58:44,410 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:58:44,410 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774724410"}]},"ts":"1684774724410"} 2023-05-22 16:58:44,412 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-22 16:58:44,416 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:58:44,418 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 413 msec 2023-05-22 16:58:46,971 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 16:58:49,195 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-22 16:58:49,195 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 16:58:49,196 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:58:54,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1227): Checking to see if 
procedure is done pid=9 2023-05-22 16:58:54,010 INFO [Listener at localhost/43469] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-22 16:58:54,012 DEBUG [Listener at localhost/43469] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:58:54,012 DEBUG [Listener at localhost/43469] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:58:54,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-22 16:58:54,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-22 16:58:54,032 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-22 16:58:54,032 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:58:54,033 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-22 16:58:54,033 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-22 16:58:54,033 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,033 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-22 16:58:54,034 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,034 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:58:54,035 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:58:54,035 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:58:54,035 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,035 DEBUG 
[(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-22 16:58:54,035 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-22 16:58:54,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-22 16:58:54,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-22 16:58:54,036 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-22 16:58:54,038 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-22 16:58:54,043 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-22 16:58:54,043 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:58:54,043 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-22 16:58:54,046 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-22 16:58:54,046 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-22 16:58:54,046 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:54,046 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. started... 
2023-05-22 16:58:54,047 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing c086c4dc57389e863217ea4c8a53092d 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 16:58:54,058 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/.tmp/info/a66c1998f4af4e3d898a91e2b06bd9b7 2023-05-22 16:58:54,065 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/.tmp/info/a66c1998f4af4e3d898a91e2b06bd9b7 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/info/a66c1998f4af4e3d898a91e2b06bd9b7 2023-05-22 16:58:54,070 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/info/a66c1998f4af4e3d898a91e2b06bd9b7, entries=2, sequenceid=6, filesize=4.8 K 2023-05-22 16:58:54,071 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c086c4dc57389e863217ea4c8a53092d in 24ms, sequenceid=6, compaction requested=false 2023-05-22 16:58:54,072 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for c086c4dc57389e863217ea4c8a53092d: 2023-05-22 16:58:54,072 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:58:54,072 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-22 16:58:54,072 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
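The flush above reports entries=2 for the hbase:namespace region, which on a fresh mini-cluster is most likely the "default" and "hbase" namespace rows. A hypothetical read-back sketch (assuming only the standard client API; nothing here is taken from the test itself) that lists those rows:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanNamespaceSketch {
  // Prints the row keys of hbase:namespace; the flush above persisted these entries
  // into the new store file under .../namespace/.../info/.
  static void dumpNamespaceRows(Connection conn) throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("hbase:namespace"));
         ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result row : scanner) {
        System.out.println(Bytes.toString(row.getRow()));
      }
    }
  }
}
```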
2023-05-22 16:58:54,072 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,072 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-22 16:58:54,072 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-22 16:58:54,074 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,074 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,074 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,074 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:58:54,074 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:58:54,074 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,074 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-22 16:58:54,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:58:54,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:58:54,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-22 16:58:54,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:58:54,076 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-22 16:58:54,076 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-22 16:58:54,076 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@25079463[Count = 0] remaining members to acquire global barrier 2023-05-22 16:58:54,077 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,078 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,078 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,078 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,078 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-22 16:58:54,078 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-22 16:58:54,078 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,42727,1684774722949' in zk 2023-05-22 16:58:54,078 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,078 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-22 16:58:54,081 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-22 16:58:54,081 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,081 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:58:54,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,081 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 
2023-05-22 16:58:54,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:58:54,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:58:54,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:58:54,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:58:54,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-22 16:58:54,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:58:54,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-22 16:58:54,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,42727,1684774722949': 2023-05-22 16:58:54,084 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-05-22 16:58:54,084 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-22 16:58:54,084 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
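The "|-abort / |-acquired / |-reached" dumps above come from ZKProcedureUtil logging the barrier layout under /hbase/flush-table-proc. A hypothetical read-only walk of the same znodes with the plain ZooKeeper client (connection string taken from the quorum shown in the log; whether the per-procedure children still exist depends on timing, since the coordinator clears them right after completion):

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DumpFlushProcZNodesSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Quorum address as logged: 127.0.0.1:54040
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54040", 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();
    // Walk the three barrier phases the coordinator prints above.
    for (String phase : new String[] {"acquired", "reached", "abort"}) {
      String path = "/hbase/flush-table-proc/" + phase;
      System.out.println(path);
      for (String proc : zk.getChildren(path, false)) {
        System.out.println("  |-" + proc);
        for (String member : zk.getChildren(path + "/" + proc, false)) {
          System.out.println("     |-" + member);
        }
      }
    }
    zk.close();
  }
}
```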
2023-05-22 16:58:54,084 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-22 16:58:54,084 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-22 16:58:54,084 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-22 16:58:54,086 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,086 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,086 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,086 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:58:54,086 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:58:54,086 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:58:54,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-22 16:58:54,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:58:54,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----hbase:namespace 2023-05-22 16:58:54,088 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:58:54,089 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-22 16:58:54,089 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-22 16:58:54,094 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-22 16:58:54,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-22 16:58:54,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-22 16:58:54,094 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:58:54,094 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:58:54,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:58:54,096 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:58:54,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:58:54,097 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-22 16:58:54,097 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-22 16:59:04,097 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2704): Getting current status of procedure from master... 
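The HBaseAdmin entries above ("Waiting a max of 300000 ms ... Sleeping: 10000ms ... Getting current status of procedure from master") are the client side of the flush-table-proc round trip. A minimal sketch of that pattern, assuming it is driven through Admin.execProcedure()/isProcedureFinished(); whether TestLogRolling issues exactly these calls, and whether the polling happens inside the client library or in test code, are assumptions here:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.client.Admin;

public class FlushTableProcSketch {
  // Triggers the distributed flush for one table and polls the master for completion,
  // mirroring the 10000 ms sleep-and-recheck entries in the log. The explicit loop is
  // shown for clarity; the HBase client may perform equivalent waiting internally.
  static void flushViaProcedure(Admin admin, String tableName) throws Exception {
    Map<String, String> props = new HashMap<>();
    admin.execProcedure("flush-table-proc", tableName, props);
    while (!admin.isProcedureFinished("flush-table-proc", tableName, props)) {
      Thread.sleep(10_000L);
    }
  }
}
```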
2023-05-22 16:59:04,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-22 16:59:04,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-22 16:59:04,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,116 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:04,116 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:04,116 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-22 16:59:04,116 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-22 16:59:04,117 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,117 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,118 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:04,118 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,118 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:04,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:04,118 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,118 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-22 16:59:04,118 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,119 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-22 16:59:04,119 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,119 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,119 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,119 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-22 16:59:04,119 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:04,120 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-22 16:59:04,120 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-22 16:59:04,120 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-22 16:59:04,120 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:04,120 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. started... 
2023-05-22 16:59:04,120 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 91be25139f809fef2730e0b2b355ff92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 16:59:04,132 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/3f9816b799314f0e8e58e41cfaa91ccc 2023-05-22 16:59:04,141 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/3f9816b799314f0e8e58e41cfaa91ccc as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc 2023-05-22 16:59:04,146 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc, entries=1, sequenceid=5, filesize=5.8 K 2023-05-22 16:59:04,147 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 91be25139f809fef2730e0b2b355ff92 in 27ms, sequenceid=5, compaction requested=false 2023-05-22 16:59:04,148 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:04,148 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:04,148 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-22 16:59:04,148 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
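Each flush of the test region above carries roughly 1 KB from a single row (dataSize=1.05 KB, entries=1). A hypothetical sketch of the kind of write that produces that; the row key, qualifier, and value size are invented for illustration and are not taken from the test:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteThenFlushSketch {
  // Writes one ~1 KB cell into the "info" family of the test table; a subsequent flush
  // request (e.g. the flush-table-proc call sketched earlier) turns this memstore
  // content into the next store file, like the sequenceid=5 and sequenceid=9 files above.
  static void writeOneRow(Connection conn, long i) throws Exception {
    TableName tn =
        TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Table table = conn.getTable(tn)) {
      Put put = new Put(Bytes.toBytes("row-" + i));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), new byte[1024]);
      table.put(put);
    }
  }
}
```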
2023-05-22 16:59:04,148 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,148 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-22 16:59:04,148 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-22 16:59:04,151 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,151 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:04,152 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:04,152 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,152 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-22 16:59:04,152 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:04,152 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:04,152 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,153 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,153 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:04,153 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-22 16:59:04,153 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2138141b[Count = 0] remaining members to acquire 
global barrier 2023-05-22 16:59:04,153 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-22 16:59:04,153 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,155 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,155 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,155 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,155 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-22 16:59:04,155 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,155 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-22 16:59:04,155 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-22 16:59:04,155 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,42727,1684774722949' in zk 2023-05-22 16:59:04,157 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-22 16:59:04,157 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,157 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-22 16:59:04,157 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,157 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:04,157 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:04,157 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-22 16:59:04,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:04,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:04,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,159 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:04,159 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,159 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,160 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,42727,1684774722949': 2023-05-22 16:59:04,160 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-22 16:59:04,160 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-22 16:59:04,160 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-22 16:59:04,160 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-22 16:59:04,160 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,160 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-22 16:59:04,165 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,165 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,165 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,165 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:04,165 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,165 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:04,165 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:04,166 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,166 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,166 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:04,166 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,166 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:04,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,169 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,169 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,170 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:04,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:04,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-22 16:59:04,170 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:04,170 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:04,170 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-22 16:59:04,170 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:04,171 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:04,170 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,171 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-22 16:59:04,171 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:04,171 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,171 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-22 16:59:14,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-22 16:59:14,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-22 16:59:14,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-22 16:59:14,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,181 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:14,181 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:14,182 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-22 16:59:14,182 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-22 16:59:14,182 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,182 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,184 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,184 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:14,185 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:14,185 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,185 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,185 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,186 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-22 16:59:14,186 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,186 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-22 16:59:14,186 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:14,186 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-22 16:59:14,186 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-22 16:59:14,186 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-22 16:59:14,187 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:14,187 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. started... 
2023-05-22 16:59:14,187 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 91be25139f809fef2730e0b2b355ff92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 16:59:14,196 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/9a6202747ace4c338e82ff15e3c20349 2023-05-22 16:59:14,202 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/9a6202747ace4c338e82ff15e3c20349 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349 2023-05-22 16:59:14,208 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349, entries=1, sequenceid=9, filesize=5.8 K 2023-05-22 16:59:14,209 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 91be25139f809fef2730e0b2b355ff92 in 22ms, sequenceid=9, compaction requested=false 2023-05-22 16:59:14,209 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:14,209 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:14,209 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-22 16:59:14,210 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-22 16:59:14,210 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,210 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-22 16:59:14,210 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-22 16:59:14,212 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,212 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:14,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:14,212 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,212 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-22 16:59:14,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:14,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:14,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:14,213 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-22 16:59:14,213 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@643a9e58[Count = 0] remaining members to acquire 
global barrier 2023-05-22 16:59:14,213 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-22 16:59:14,214 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,215 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,215 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,215 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,215 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-22 16:59:14,215 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-22 16:59:14,215 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,42727,1684774722949' in zk 2023-05-22 16:59:14,215 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,215 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-22 16:59:14,216 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-22 16:59:14,216 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,216 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-22 16:59:14,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:14,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:14,217 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-22 16:59:14,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:14,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:14,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:14,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,42727,1684774722949': 2023-05-22 16:59:14,219 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-22 16:59:14,219 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-22 16:59:14,219 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-22 16:59:14,219 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-22 16:59:14,219 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,220 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-22 16:59:14,227 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,227 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,227 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:14,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:14,227 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:14,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,227 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:14,227 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:14,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:14,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,228 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:14,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,229 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:14,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,232 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:14,232 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-22 16:59:14,232 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:14,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:14,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:14,232 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:59:14,232 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,233 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,233 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:14,233 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-22 16:59:14,233 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-22 16:59:14,233 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:14,233 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:24,233 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-22 16:59:24,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-22 16:59:24,247 INFO [Listener at localhost/43469] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774723325 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774764237 2023-05-22 16:59:24,248 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK], DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK]] 2023-05-22 16:59:24,248 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(716): hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774723325 is not closed yet, will try archiving it next time 2023-05-22 16:59:24,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-22 16:59:24,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-22 16:59:24,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,256 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:24,256 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:24,257 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-22 16:59:24,257 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-22 16:59:24,257 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,257 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,259 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,259 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:24,259 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:24,259 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:24,259 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,259 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-22 16:59:24,259 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,260 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,260 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-22 16:59:24,260 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,260 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,260 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-22 16:59:24,260 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,260 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-22 16:59:24,260 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:24,261 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-22 16:59:24,261 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-22 16:59:24,261 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-22 16:59:24,261 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:24,261 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. started... 2023-05-22 16:59:24,261 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 91be25139f809fef2730e0b2b355ff92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 16:59:24,272 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/7f77fd05d520458f942b64a2b88fe336 2023-05-22 16:59:24,279 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/7f77fd05d520458f942b64a2b88fe336 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336 2023-05-22 16:59:24,285 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336, entries=1, sequenceid=13, filesize=5.8 K 2023-05-22 16:59:24,286 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 91be25139f809fef2730e0b2b355ff92 in 25ms, sequenceid=13, compaction requested=true 2023-05-22 16:59:24,286 DEBUG 
[rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:24,286 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:24,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-22 16:59:24,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-22 16:59:24,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-22 16:59:24,286 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-22 16:59:24,288 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,288 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,288 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:24,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:24,289 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,289 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-22 16:59:24,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:24,289 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:24,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:24,290 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-22 16:59:24,290 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4ca98e93[Count = 0] remaining members to acquire global barrier 2023-05-22 16:59:24,290 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-22 16:59:24,290 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,291 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,291 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,291 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,292 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-22 16:59:24,292 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-22 16:59:24,292 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,292 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-22 16:59:24,292 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,42727,1684774722949' in zk 2023-05-22 16:59:24,294 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,294 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-22 16:59:24,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,294 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:59:24,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:24,294 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-22 16:59:24,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:24,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:24,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:24,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:24,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,297 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,42727,1684774722949': 2023-05-22 16:59:24,297 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-22 16:59:24,297 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-22 16:59:24,297 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-22 16:59:24,297 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-22 16:59:24,297 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,297 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-22 16:59:24,300 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,300 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:24,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:24,300 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,301 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:24,301 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:24,301 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,301 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:24,301 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:24,301 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,302 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,302 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:24,302 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,303 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,303 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,303 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:24,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,307 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,307 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:24,307 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:24,307 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,307 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:24,307 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-22 16:59:24,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:24,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:24,307 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-22 16:59:24,307 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:59:24,307 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,308 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:24,308 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:24,308 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-22 16:59:24,308 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,308 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-22 16:59:24,308 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:24,308 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:24,309 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,309 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-22 16:59:34,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-22 16:59:34,310 DEBUG [Listener at localhost/43469] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 16:59:34,315 DEBUG [Listener at localhost/43469] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 16:59:34,315 DEBUG [Listener at localhost/43469] regionserver.HStore(1912): 91be25139f809fef2730e0b2b355ff92/info is initiating minor compaction (all files) 2023-05-22 16:59:34,315 INFO [Listener at localhost/43469] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:59:34,315 INFO [Listener at localhost/43469] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:34,315 INFO [Listener at localhost/43469] regionserver.HRegion(2259): Starting compaction of 91be25139f809fef2730e0b2b355ff92/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:34,315 INFO [Listener at localhost/43469] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336] into tmpdir=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp, totalSize=17.4 K 2023-05-22 16:59:34,316 DEBUG [Listener at localhost/43469] compactions.Compactor(207): Compacting 3f9816b799314f0e8e58e41cfaa91ccc, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1684774744107 2023-05-22 16:59:34,317 DEBUG [Listener at localhost/43469] compactions.Compactor(207): Compacting 9a6202747ace4c338e82ff15e3c20349, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1684774754173 2023-05-22 16:59:34,317 DEBUG [Listener at localhost/43469] compactions.Compactor(207): Compacting 7f77fd05d520458f942b64a2b88fe336, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1684774764235 2023-05-22 16:59:34,328 INFO [Listener at localhost/43469] throttle.PressureAwareThroughputController(145): 91be25139f809fef2730e0b2b355ff92#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 16:59:34,341 DEBUG [Listener at localhost/43469] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/8dc34f8b87ce4f49a3bfce9aabaa72b6 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/8dc34f8b87ce4f49a3bfce9aabaa72b6 2023-05-22 16:59:34,348 INFO [Listener at localhost/43469] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 91be25139f809fef2730e0b2b355ff92/info of 91be25139f809fef2730e0b2b355ff92 into 8dc34f8b87ce4f49a3bfce9aabaa72b6(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-22 16:59:34,348 DEBUG [Listener at localhost/43469] regionserver.HRegion(2289): Compaction status journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:34,364 INFO [Listener at localhost/43469] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774764237 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774774350 2023-05-22 16:59:34,364 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK], DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK]] 2023-05-22 16:59:34,364 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(716): hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774764237 is not closed yet, will try archiving it next time 2023-05-22 16:59:34,365 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774723325 to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774723325 2023-05-22 16:59:34,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-22 16:59:34,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
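The "Exploring compaction algorithm has selected 3 files of size 17769 ... with 1 in ratio" entry refers to the ratio test HBase applies when picking store files to compact: every file in the candidate set must be no larger than the sum of the other files' sizes times hbase.hstore.compaction.ratio (1.2 by default). A simplified sketch of that test follows; it is an illustration, not HBase's own implementation, and the per-file byte counts are approximations of the three ~5.8 K files above.

import java.util.Arrays;

public class CompactionRatioCheck {
  static boolean filesInRatio(long[] sizes, double ratio) {
    if (sizes.length < 2) {
      return true;                              // a single file is trivially in ratio
    }
    long total = Arrays.stream(sizes).sum();
    for (long size : sizes) {
      if (size > (total - size) * ratio) {
        return false;                           // one file dwarfs the rest; reject the selection
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Roughly the three ~5.8 K store files from the log (17769 bytes in total; the split
    // into individual sizes here is illustrative).
    System.out.println(filesInRatio(new long[] {5923, 5923, 5923}, 1.2));  // prints true
  }
}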
2023-05-22 16:59:34,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,373 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:34,373 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:34,374 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-22 16:59:34,374 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-22 16:59:34,374 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,374 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,379 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,379 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:34,379 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:34,379 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:34,380 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,380 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-22 16:59:34,380 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,380 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,380 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-22 16:59:34,380 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,380 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,380 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-22 16:59:34,381 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,381 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-22 16:59:34,381 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-22 16:59:34,381 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-22 16:59:34,381 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-22 16:59:34,381 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-22 16:59:34,381 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:34,381 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. started... 
2023-05-22 16:59:34,382 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 91be25139f809fef2730e0b2b355ff92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 16:59:34,393 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/5ab0d6752d254d3982a372dd9d1f3e62 2023-05-22 16:59:34,399 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/5ab0d6752d254d3982a372dd9d1f3e62 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/5ab0d6752d254d3982a372dd9d1f3e62 2023-05-22 16:59:34,404 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/5ab0d6752d254d3982a372dd9d1f3e62, entries=1, sequenceid=18, filesize=5.8 K 2023-05-22 16:59:34,405 INFO [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 91be25139f809fef2730e0b2b355ff92 in 23ms, sequenceid=18, compaction requested=false 2023-05-22 16:59:34,405 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:34,405 DEBUG [rs(jenkins-hbase4.apache.org,42727,1684774722949)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:34,405 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-22 16:59:34,405 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
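The member-side work above is a memstore flush of the test table's single region (the ~1.05 KB flush committed at sequenceid=18). Outside the coordinated flush-table-proc, the same flush can be requested directly through the Admin API; a minimal sketch, reusing an Admin handle obtained as in the earlier example:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DirectFlush {
  static void flushTestTable(Admin admin) throws java.io.IOException {
    // Every region of the table writes its memstore out as a new HFile under .tmp/ and then
    // commits it into the store, as the entries above show for this region.
    admin.flush(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
  }
}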
2023-05-22 16:59:34,405 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,405 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-22 16:59:34,405 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-22 16:59:34,408 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,408 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:34,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:34,408 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,408 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-22 16:59:34,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:34,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:34,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:34,409 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,42727,1684774722949' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-22 16:59:34,409 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5514bfe9[Count = 0] remaining members to acquire 
global barrier 2023-05-22 16:59:34,409 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-22 16:59:34,410 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,411 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,411 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,411 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,411 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-22 16:59:34,411 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-22 16:59:34,411 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,42727,1684774722949' in zk 2023-05-22 16:59:34,411 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,411 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-22 16:59:34,413 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-22 16:59:34,413 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,413 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-22 16:59:34,413 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,414 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:34,414 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:34,413 DEBUG [member: 'jenkins-hbase4.apache.org,42727,1684774722949' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-22 16:59:34,414 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:34,414 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:34,415 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,415 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,415 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:34,415 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,416 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,416 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,42727,1684774722949': 2023-05-22 16:59:34,416 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,42727,1684774722949' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-22 16:59:34,416 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-22 16:59:34,416 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-22 16:59:34,416 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-22 16:59:34,416 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,416 INFO [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-22 16:59:34,419 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,419 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,419 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,419 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-22 16:59:34,419 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-22 16:59:34,419 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,419 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,419 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:34,419 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-22 16:59:34,419 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:34,420 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:34,419 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,420 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,420 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,420 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-22 16:59:34,420 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,421 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,421 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,421 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-22 16:59:34,421 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,422 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,424 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,424 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-22 16:59:34,424 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,424 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-22 16:59:34,425 DEBUG [(jenkins-hbase4.apache.org,35369,1684774722909)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-22 16:59:34,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-22 16:59:34,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
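The znode dumps above show the layout the coordinator (ZKProcedureCoordinator) and the region-server member (ZKProcedureMemberRpcs) agree on: an acquired/, reached/ and abort/ subtree per procedure, cleared once the barrier completes. A stripped-down, polling version of the coordinator's side of that two-phase barrier is sketched below using the plain ZooKeeper client. HBase uses watches rather than polling, and the paths, parent-znode setup and names here are illustrative only.

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class TwoPhaseBarrierSketch {
  public static void runCoordinator(ZooKeeper zk, String base, String proc,
                                    List<String> members) throws Exception {
    byte[] none = new byte[0];
    // Phase 1: announce the procedure; each member adds a child znode once it has "acquired".
    zk.create(base + "/acquired/" + proc, none, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    while (zk.getChildren(base + "/acquired/" + proc, false).size() < members.size()) {
      Thread.sleep(100);   // HBase waits on ZK watches here instead of sleeping
    }
    // Phase 2: create the "reached" barrier; members finish their work and report completion
    // by adding children under it.
    zk.create(base + "/reached/" + proc, none, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    while (zk.getChildren(base + "/reached/" + proc, false).size() < members.size()) {
      Thread.sleep(100);
    }
    // Cleanup mirrors "Clearing all znodes for procedure ..." in the log; creating an
    // abort/<proc> node instead would signal failure to every member.
  }
}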
2023-05-22 16:59:34,425 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,424 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-22 16:59:34,425 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:34,425 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-22 16:59:34,425 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,425 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:34,425 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,425 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-22 16:59:34,426 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-22 16:59:34,426 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-22 16:59:34,426 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-22 16:59:34,426 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:44,426 DEBUG [Listener at localhost/43469] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-22 16:59:44,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-22 16:59:44,438 INFO [Listener at localhost/43469] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774774350 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774784429 2023-05-22 16:59:44,438 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40233,DS-19a80b28-8c32-4282-981d-30a9bb82b983,DISK], DatanodeInfoWithStorage[127.0.0.1:36275,DS-f3efe6d4-c203-4a50-bc69-93b792fcd533,DISK]] 2023-05-22 16:59:44,438 DEBUG [Listener at localhost/43469] wal.AbstractFSWAL(716): hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774774350 is not closed yet, will try archiving it next time 2023-05-22 16:59:44,438 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774764237 to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs/jenkins-hbase4.apache.org%2C42727%2C1684774722949.1684774764237 2023-05-22 16:59:44,438 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-22 16:59:44,438 INFO [Listener at localhost/43469] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-22 16:59:44,438 DEBUG [Listener at localhost/43469] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67d8f951 to 127.0.0.1:54040 2023-05-22 16:59:44,440 DEBUG [Listener at localhost/43469] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:59:44,440 DEBUG [Listener at localhost/43469] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-22 16:59:44,441 DEBUG [Listener at localhost/43469] util.JVMClusterUtil(257): Found active master hash=2097131534, stopped=false 2023-05-22 16:59:44,441 INFO [Listener at localhost/43469] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:59:44,443 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:59:44,443 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 16:59:44,443 INFO [Listener at localhost/43469] procedure2.ProcedureExecutor(629): Stopping 2023-05-22 16:59:44,444 DEBUG [Listener at localhost/43469] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26802d2f to 127.0.0.1:54040 2023-05-22 16:59:44,443 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): 
master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:44,444 DEBUG [Listener at localhost/43469] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:59:44,445 INFO [Listener at localhost/43469] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,42727,1684774722949' ***** 2023-05-22 16:59:44,445 INFO [Listener at localhost/43469] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 16:59:44,445 INFO [RS:0;jenkins-hbase4:42727] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 16:59:44,446 INFO [RS:0;jenkins-hbase4:42727] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 16:59:44,446 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 16:59:44,446 INFO [RS:0;jenkins-hbase4:42727] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 16:59:44,446 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(3303): Received CLOSE for c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:59:44,446 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(3303): Received CLOSE for 91be25139f809fef2730e0b2b355ff92 2023-05-22 16:59:44,446 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:44,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c086c4dc57389e863217ea4c8a53092d, disabling compactions & flushes 2023-05-22 16:59:44,447 DEBUG [RS:0;jenkins-hbase4:42727] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3316c822 to 127.0.0.1:54040 2023-05-22 16:59:44,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:59:44,447 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:59:44,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:59:44,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:59:44,447 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 16:59:44,447 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-22 16:59:44,447 INFO [RS:0;jenkins-hbase4:42727] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 16:59:44,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:59:44,447 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 16:59:44,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 
after waiting 0 ms 2023-05-22 16:59:44,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:59:44,447 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-22 16:59:44,447 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1478): Online Regions={c086c4dc57389e863217ea4c8a53092d=hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d., 1588230740=hbase:meta,,1.1588230740, 91be25139f809fef2730e0b2b355ff92=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.} 2023-05-22 16:59:44,448 DEBUG [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1504): Waiting on 1588230740, 91be25139f809fef2730e0b2b355ff92, c086c4dc57389e863217ea4c8a53092d 2023-05-22 16:59:44,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:59:44,448 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:59:44,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:59:44,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:59:44,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:59:44,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-22 16:59:44,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/namespace/c086c4dc57389e863217ea4c8a53092d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-22 16:59:44,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:59:44,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c086c4dc57389e863217ea4c8a53092d: 2023-05-22 16:59:44,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774723489.c086c4dc57389e863217ea4c8a53092d. 2023-05-22 16:59:44,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 91be25139f809fef2730e0b2b355ff92, disabling compactions & flushes 2023-05-22 16:59:44,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:44,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 
2023-05-22 16:59:44,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. after waiting 0 ms 2023-05-22 16:59:44,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:44,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 91be25139f809fef2730e0b2b355ff92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 16:59:44,472 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/.tmp/info/2c541aa39fb846d2bca1d263dc3d98f4 2023-05-22 16:59:44,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/9d4670788e5442dd92903ba25778ce08 2023-05-22 16:59:44,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/.tmp/info/9d4670788e5442dd92903ba25778ce08 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9d4670788e5442dd92903ba25778ce08 2023-05-22 16:59:44,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9d4670788e5442dd92903ba25778ce08, entries=1, sequenceid=22, filesize=5.8 K 2023-05-22 16:59:44,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 91be25139f809fef2730e0b2b355ff92 in 29ms, sequenceid=22, compaction requested=true 2023-05-22 16:59:44,497 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336] to 
archive 2023-05-22 16:59:44,500 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-22 16:59:44,500 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/.tmp/table/98e8f5f6c9f44a72b739c69e39d96707 2023-05-22 16:59:44,502 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/3f9816b799314f0e8e58e41cfaa91ccc 2023-05-22 16:59:44,504 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349 to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/9a6202747ace4c338e82ff15e3c20349 2023-05-22 16:59:44,505 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336 to hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/info/7f77fd05d520458f942b64a2b88fe336 2023-05-22 16:59:44,508 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/.tmp/info/2c541aa39fb846d2bca1d263dc3d98f4 as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/info/2c541aa39fb846d2bca1d263dc3d98f4 2023-05-22 16:59:44,523 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/info/2c541aa39fb846d2bca1d263dc3d98f4, entries=20, sequenceid=14, filesize=7.6 K 2023-05-22 16:59:44,524 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/.tmp/table/98e8f5f6c9f44a72b739c69e39d96707 as 
hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/table/98e8f5f6c9f44a72b739c69e39d96707 2023-05-22 16:59:44,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/91be25139f809fef2730e0b2b355ff92/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-22 16:59:44,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:44,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 91be25139f809fef2730e0b2b355ff92: 2023-05-22 16:59:44,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684774724002.91be25139f809fef2730e0b2b355ff92. 2023-05-22 16:59:44,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/table/98e8f5f6c9f44a72b739c69e39d96707, entries=4, sequenceid=14, filesize=4.9 K 2023-05-22 16:59:44,533 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 85ms, sequenceid=14, compaction requested=false 2023-05-22 16:59:44,539 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-22 16:59:44,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-22 16:59:44,540 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:59:44,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:59:44,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-22 16:59:44,648 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42727,1684774722949; all regions closed. 
2023-05-22 16:59:44,649 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:44,655 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs 2023-05-22 16:59:44,655 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C42727%2C1684774722949.meta:.meta(num 1684774723440) 2023-05-22 16:59:44,656 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/WALs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:44,661 DEBUG [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/oldWALs 2023-05-22 16:59:44,661 INFO [RS:0;jenkins-hbase4:42727] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C42727%2C1684774722949:(num 1684774784429) 2023-05-22 16:59:44,661 DEBUG [RS:0;jenkins-hbase4:42727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:59:44,662 INFO [RS:0;jenkins-hbase4:42727] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:59:44,662 INFO [RS:0;jenkins-hbase4:42727] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-22 16:59:44,662 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
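The "Moved N WAL file(s) to .../oldWALs" and "Closed WAL" entries are the tail end of the log rolling this test exercises. A client can also force a roll on a specific region server explicitly; a minimal sketch, where the host/port/startcode triplet mirrors the server name used throughout this log and is otherwise illustrative:

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class RollWalSketch {
  static void rollWal(Admin admin) throws Exception {
    // Same "host,port,startcode" triplet format that appears throughout the log.
    ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org", 42727, 1684774722949L);
    // Closes the current WAL and starts a new one; fully-replicated old files are later
    // archived into oldWALs, as the entries above show.
    admin.rollWALWriter(rs);
  }
}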
2023-05-22 16:59:44,663 INFO [RS:0;jenkins-hbase4:42727] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42727 2023-05-22 16:59:44,666 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42727,1684774722949 2023-05-22 16:59:44,666 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:59:44,666 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:59:44,668 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42727,1684774722949] 2023-05-22 16:59:44,668 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42727,1684774722949; numProcessing=1 2023-05-22 16:59:44,670 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42727,1684774722949 already deleted, retry=false 2023-05-22 16:59:44,670 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42727,1684774722949 expired; onlineServers=0 2023-05-22 16:59:44,670 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35369,1684774722909' ***** 2023-05-22 16:59:44,670 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 16:59:44,670 DEBUG [M:0;jenkins-hbase4:35369] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78c02b49, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:59:44,670 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:59:44,670 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35369,1684774722909; all regions closed. 2023-05-22 16:59:44,670 DEBUG [M:0;jenkins-hbase4:35369] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 16:59:44,670 DEBUG [M:0;jenkins-hbase4:35369] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 16:59:44,670 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-22 16:59:44,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774723084] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774723084,5,FailOnTimeoutGroup] 2023-05-22 16:59:44,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774723085] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774723085,5,FailOnTimeoutGroup] 2023-05-22 16:59:44,670 DEBUG [M:0;jenkins-hbase4:35369] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-22 16:59:44,672 INFO [M:0;jenkins-hbase4:35369] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-22 16:59:44,672 INFO [M:0;jenkins-hbase4:35369] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-22 16:59:44,672 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-22 16:59:44,672 INFO [M:0;jenkins-hbase4:35369] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-22 16:59:44,672 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:44,672 DEBUG [M:0;jenkins-hbase4:35369] master.HMaster(1512): Stopping service threads 2023-05-22 16:59:44,672 INFO [M:0;jenkins-hbase4:35369] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-05-22 16:59:44,673 ERROR [M:0;jenkins-hbase4:35369] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-22 16:59:44,673 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:59:44,673 INFO [M:0;jenkins-hbase4:35369] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-22 16:59:44,673 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-22 16:59:44,673 DEBUG [M:0;jenkins-hbase4:35369] zookeeper.ZKUtil(398): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-22 16:59:44,673 WARN [M:0;jenkins-hbase4:35369] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-22 16:59:44,673 INFO [M:0;jenkins-hbase4:35369] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-22 16:59:44,673 INFO [M:0;jenkins-hbase4:35369] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-22 16:59:44,674 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:59:44,674 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:44,674 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:44,674 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:59:44,674 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:44,674 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-05-22 16:59:44,686 INFO [M:0;jenkins-hbase4:35369] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f9173a7186d94b67b14009ae7cf160cf 2023-05-22 16:59:44,691 INFO [M:0;jenkins-hbase4:35369] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9173a7186d94b67b14009ae7cf160cf 2023-05-22 16:59:44,691 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f9173a7186d94b67b14009ae7cf160cf as hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f9173a7186d94b67b14009ae7cf160cf 2023-05-22 16:59:44,697 INFO [M:0;jenkins-hbase4:35369] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9173a7186d94b67b14009ae7cf160cf 2023-05-22 16:59:44,697 INFO [M:0;jenkins-hbase4:35369] regionserver.HStore(1080): Added hdfs://localhost:35245/user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f9173a7186d94b67b14009ae7cf160cf, entries=11, sequenceid=100, filesize=6.1 K 2023-05-22 16:59:44,698 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=100, compaction requested=false 2023-05-22 16:59:44,699 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:44,699 DEBUG [M:0;jenkins-hbase4:35369] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:59:44,699 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8196ad1b-474a-af4f-6a44-5fa8a1280110/MasterData/WALs/jenkins-hbase4.apache.org,35369,1684774722909 2023-05-22 16:59:44,702 INFO [M:0;jenkins-hbase4:35369] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-22 16:59:44,702 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 16:59:44,702 INFO [M:0;jenkins-hbase4:35369] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35369 2023-05-22 16:59:44,704 DEBUG [M:0;jenkins-hbase4:35369] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35369,1684774722909 already deleted, retry=false 2023-05-22 16:59:44,769 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:59:44,769 INFO [RS:0;jenkins-hbase4:42727] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42727,1684774722949; zookeeper connection closed. 2023-05-22 16:59:44,769 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): regionserver:42727-0x10053d4a0510001, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:59:44,769 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@59c774cc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@59c774cc 2023-05-22 16:59:44,769 INFO [Listener at localhost/43469] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-22 16:59:44,869 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:59:44,869 INFO [M:0;jenkins-hbase4:35369] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35369,1684774722909; zookeeper connection closed. 
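With the master and region server exited and their ZooKeeper sessions closed above, the remaining entries tear down the datanodes and the mini ZooKeeper cluster. A sketch of the kind of JUnit teardown that drives this sequence, assuming a TEST_UTIL field as is conventional in HBase tests (the actual TestLogRolling wiring may differ):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.After;

// Sketch of the teardown that produces the shutdown sequence above;
// TEST_UTIL is an assumed field name, not necessarily the real test's wiring.
public class MiniClusterTeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @After
  public void tearDown() throws Exception {
    // Stops master and region servers, then the datanodes and the mini ZooKeeper
    // cluster, ending with the "Minicluster is down" message seen below.
    TEST_UTIL.shutdownMiniCluster();
  }
}
```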
2023-05-22 16:59:44,869 DEBUG [Listener at localhost/43469-EventThread] zookeeper.ZKWatcher(600): master:35369-0x10053d4a0510000, quorum=127.0.0.1:54040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 16:59:44,870 WARN [Listener at localhost/43469] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:59:44,873 WARN [BP-1282971103-172.31.14.131-1684774722353 heartbeating to localhost/127.0.0.1:35245] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1282971103-172.31.14.131-1684774722353 (Datanode Uuid 1396087b-4d56-4bd2-b72a-a5ac82a3cb78) service to localhost/127.0.0.1:35245 2023-05-22 16:59:44,873 INFO [Listener at localhost/43469] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:59:44,873 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/dfs/data/data3/current/BP-1282971103-172.31.14.131-1684774722353] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:59:44,875 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/dfs/data/data4/current/BP-1282971103-172.31.14.131-1684774722353] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:59:44,978 WARN [Listener at localhost/43469] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 16:59:44,983 INFO [Listener at localhost/43469] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:59:45,087 WARN [BP-1282971103-172.31.14.131-1684774722353 heartbeating to localhost/127.0.0.1:35245] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 16:59:45,087 WARN [BP-1282971103-172.31.14.131-1684774722353 heartbeating to localhost/127.0.0.1:35245] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1282971103-172.31.14.131-1684774722353 (Datanode Uuid cc29ae53-7830-4619-92e1-66b7b014800f) service to localhost/127.0.0.1:35245 2023-05-22 16:59:45,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/dfs/data/data1/current/BP-1282971103-172.31.14.131-1684774722353] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:59:45,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/cluster_69e11878-5ed0-e17b-f7c2-286839d397f2/dfs/data/data2/current/BP-1282971103-172.31.14.131-1684774722353] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 16:59:45,099 INFO [Listener at localhost/43469] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 16:59:45,202 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 16:59:45,212 INFO [Listener at 
localhost/43469] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-22 16:59:45,229 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-22 16:59:45,239 INFO [Listener at localhost/43469] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 85) - Thread LEAK? -, OpenFileDescriptor=497 (was 463) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=59 (was 105), ProcessCount=168 (was 169), AvailableMemoryMB=4962 (was 5132) 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=497, MaxFileDescriptor=60000, SystemLoadAverage=59, ProcessCount=168, AvailableMemoryMB=4961 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/hadoop.log.dir so I do NOT create it in target/test-data/a491f768-4589-6add-f3e9-7982077e3094 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/688be90a-d23d-3ddf-fc04-0355f2f11e6c/hadoop.tmp.dir so I do NOT create it in target/test-data/a491f768-4589-6add-f3e9-7982077e3094 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892, deleteOnExit=true 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 16:59:45,248 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/test.cache.data in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/hadoop.log.dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 16:59:45,249 DEBUG [Listener at localhost/43469] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 16:59:45,249 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/dfs.journalnode.edits.dir in 
system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/nfs.dump.dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/java.io.tmpdir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 16:59:45,250 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 16:59:45,251 INFO [Listener at localhost/43469] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 16:59:45,252 WARN [Listener at localhost/43469] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
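The restart above brings up a fresh minicluster with StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1}. A sketch of building that option explicitly; the real test may simply rely on the utility's defaults:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Sketch matching the StartMiniClusterOption printed above; the real test may
// rely on the utility's defaults rather than building the option by hand.
public class MiniClusterStartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // one HMaster
        .numRegionServers(1)  // one region server
        .numDataNodes(2)      // two HDFS datanodes
        .numZkServers(1)      // one mini ZooKeeper server
        .build();
    util.startMiniCluster(option); // brings up DFS, ZooKeeper, master and RS in that order
    // ... test body runs against util.getConnection() ...
    util.shutdownMiniCluster();
  }
}
```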
2023-05-22 16:59:45,255 WARN [Listener at localhost/43469] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:59:45,255 WARN [Listener at localhost/43469] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:59:45,304 WARN [Listener at localhost/43469] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:59:45,305 INFO [Listener at localhost/43469] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:59:45,310 INFO [Listener at localhost/43469] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/java.io.tmpdir/Jetty_localhost_38355_hdfs____opo4e0/webapp 2023-05-22 16:59:45,401 INFO [Listener at localhost/43469] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38355 2023-05-22 16:59:45,403 WARN [Listener at localhost/43469] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 16:59:45,406 WARN [Listener at localhost/43469] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 16:59:45,406 WARN [Listener at localhost/43469] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 16:59:45,444 WARN [Listener at localhost/36769] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:59:45,453 WARN [Listener at localhost/36769] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:59:45,455 WARN [Listener at localhost/36769] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:59:45,456 INFO [Listener at localhost/36769] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:59:45,461 INFO [Listener at localhost/36769] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/java.io.tmpdir/Jetty_localhost_45169_datanode____.qy8kqu/webapp 2023-05-22 16:59:45,551 INFO [Listener at localhost/36769] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45169 2023-05-22 16:59:45,557 WARN [Listener at localhost/41761] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:59:45,568 WARN [Listener at localhost/41761] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 16:59:45,570 WARN [Listener at localhost/41761] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 16:59:45,571 INFO [Listener at localhost/41761] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 16:59:45,575 INFO [Listener at localhost/41761] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/java.io.tmpdir/Jetty_localhost_34561_datanode____qml6vq/webapp 2023-05-22 16:59:45,652 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x952084a08d8251f3: Processing first storage report for DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52 from datanode 78cbd99a-b923-485c-85ac-848357f5bd3b 2023-05-22 16:59:45,652 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x952084a08d8251f3: from storage DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52 node DatanodeRegistration(127.0.0.1:37637, datanodeUuid=78cbd99a-b923-485c-85ac-848357f5bd3b, infoPort=36421, infoSecurePort=0, ipcPort=41761, storageInfo=lv=-57;cid=testClusterID;nsid=441551536;c=1684774785257), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:59:45,652 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x952084a08d8251f3: Processing first storage report for DS-de5faeee-f597-418c-9973-ec6df9067d93 from datanode 78cbd99a-b923-485c-85ac-848357f5bd3b 2023-05-22 16:59:45,652 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x952084a08d8251f3: from storage DS-de5faeee-f597-418c-9973-ec6df9067d93 node DatanodeRegistration(127.0.0.1:37637, datanodeUuid=78cbd99a-b923-485c-85ac-848357f5bd3b, infoPort=36421, infoSecurePort=0, ipcPort=41761, storageInfo=lv=-57;cid=testClusterID;nsid=441551536;c=1684774785257), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:59:45,680 INFO [Listener at localhost/41761] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34561 2023-05-22 16:59:45,689 WARN [Listener at localhost/39181] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 16:59:45,781 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd6a308213f94e6b: Processing first storage report for DS-a4abff20-5e75-4147-9ac0-3584bbb270d7 from datanode 0ee17268-cba2-4c4f-b605-f8248a399180 2023-05-22 16:59:45,781 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd6a308213f94e6b: from storage DS-a4abff20-5e75-4147-9ac0-3584bbb270d7 node DatanodeRegistration(127.0.0.1:34449, datanodeUuid=0ee17268-cba2-4c4f-b605-f8248a399180, infoPort=46775, infoSecurePort=0, ipcPort=39181, storageInfo=lv=-57;cid=testClusterID;nsid=441551536;c=1684774785257), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:59:45,781 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd6a308213f94e6b: Processing first storage report for DS-e0a2b1d1-051e-4b56-8b9d-c94428cad35a from datanode 0ee17268-cba2-4c4f-b605-f8248a399180 2023-05-22 16:59:45,781 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd6a308213f94e6b: from storage DS-e0a2b1d1-051e-4b56-8b9d-c94428cad35a node DatanodeRegistration(127.0.0.1:34449, datanodeUuid=0ee17268-cba2-4c4f-b605-f8248a399180, infoPort=46775, infoSecurePort=0, ipcPort=39181, storageInfo=lv=-57;cid=testClusterID;nsid=441551536;c=1684774785257), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 16:59:45,799 DEBUG [Listener at localhost/39181] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094 2023-05-22 16:59:45,801 INFO [Listener at localhost/39181] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/zookeeper_0, clientPort=61632, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 16:59:45,802 INFO [Listener at localhost/39181] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61632 2023-05-22 16:59:45,803 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,804 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,819 INFO [Listener at localhost/39181] util.FSUtils(471): Created version file at hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f with version=8 2023-05-22 16:59:45,819 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,822 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:59:45,823 INFO 
[Listener at localhost/39181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:59:45,824 INFO [Listener at localhost/39181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38619 2023-05-22 16:59:45,824 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,826 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,827 INFO [Listener at localhost/39181] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38619 connecting to ZooKeeper ensemble=127.0.0.1:61632 2023-05-22 16:59:45,834 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:386190x0, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:59:45,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38619-0x10053d5960c0000 connected 2023-05-22 16:59:45,850 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:59:45,850 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:59:45,851 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:59:45,852 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38619 2023-05-22 16:59:45,852 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38619 2023-05-22 16:59:45,852 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38619 2023-05-22 16:59:45,852 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38619 2023-05-22 16:59:45,852 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38619 2023-05-22 16:59:45,853 INFO [Listener at localhost/39181] master.HMaster(444): hbase.rootdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f, hbase.cluster.distributed=false 2023-05-22 16:59:45,866 INFO [Listener at localhost/39181] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 16:59:45,866 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,866 INFO [Listener at localhost/39181] 
ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,866 INFO [Listener at localhost/39181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 16:59:45,866 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 16:59:45,867 INFO [Listener at localhost/39181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 16:59:45,867 INFO [Listener at localhost/39181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 16:59:45,868 INFO [Listener at localhost/39181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32813 2023-05-22 16:59:45,868 INFO [Listener at localhost/39181] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 16:59:45,869 DEBUG [Listener at localhost/39181] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 16:59:45,870 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,871 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,872 INFO [Listener at localhost/39181] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32813 connecting to ZooKeeper ensemble=127.0.0.1:61632 2023-05-22 16:59:45,875 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:328130x0, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 16:59:45,878 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32813-0x10053d5960c0001 connected 2023-05-22 16:59:45,878 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 16:59:45,879 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 16:59:45,880 DEBUG [Listener at localhost/39181] zookeeper.ZKUtil(164): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 16:59:45,880 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32813 2023-05-22 16:59:45,881 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32813 2023-05-22 16:59:45,881 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32813 2023-05-22 16:59:45,881 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32813 2023-05-22 16:59:45,881 DEBUG [Listener at localhost/39181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32813 2023-05-22 16:59:45,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:45,887 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:59:45,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:45,888 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:59:45,888 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 16:59:45,888 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:45,889 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:59:45,890 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 16:59:45,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38619,1684774785821 from backup master directory 2023-05-22 16:59:45,893 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:45,893 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 16:59:45,893 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
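The entries above show the master registering under /hbase/backup-masters and then deleting that znode as it claims /hbase/master. A sketch of confirming the outcome of that election from a client, using the test-local ZooKeeper client port 61632 printed earlier in this restart; class and variable names are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch only: report which server won the election recorded above.
// Quorum and client port are the test-local values from this restart.
public class ActiveMasterCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 61632);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      System.out.println("active master:  " + metrics.getMasterName());
      System.out.println("backup masters: " + metrics.getBackupMasterNames());
    }
  }
}
```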
2023-05-22 16:59:45,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:45,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/hbase.id with ID: fb2603d9-e7f4-403d-a261-db21017953bb 2023-05-22 16:59:45,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:45,921 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:45,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7eae862b to 127.0.0.1:61632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:59:45,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44817141, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:59:45,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:59:45,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 16:59:45,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:59:45,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store-tmp 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 16:59:45,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:45,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 16:59:45,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:59:45,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/WALs/jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:45,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38619%2C1684774785821, suffix=, logDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/WALs/jenkins-hbase4.apache.org,38619,1684774785821, archiveDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/oldWALs, maxLogs=10 2023-05-22 16:59:45,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/WALs/jenkins-hbase4.apache.org,38619,1684774785821/jenkins-hbase4.apache.org%2C38619%2C1684774785821.1684774785951 2023-05-22 16:59:45,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34449,DS-a4abff20-5e75-4147-9ac0-3584bbb270d7,DISK], DatanodeInfoWithStorage[127.0.0.1:37637,DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52,DISK]] 2023-05-22 16:59:45,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:59:45,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:45,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,960 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,961 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,963 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 16:59:45,963 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 16:59:45,964 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:45,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 16:59:45,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:59:45,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=849354, jitterRate=0.08001033961772919}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:59:45,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 16:59:45,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 16:59:45,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 16:59:45,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
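The WAL configuration entry above reports blocksize=256 MB, rollsize=128 MB and maxLogs=10 for the master's local-store WAL. Roll size is block size times a roll multiplier, which matches 256 MB x 0.5 = 128 MB here. The sketch below restates that relationship using the general region-server keys; the master's local store applies its own overrides, so treat these keys as illustrative rather than the exact source of the logged values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative keys only: the usual region-server WAL rolling knobs.
// rollsize = blocksize x multiplier, matching 256 MB x 0.5 = 128 MB above.
public class WalRollConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "filesystem");                  // FSHLogProvider
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    conf.setInt("hbase.regionserver.maxlogs", 10);
    long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
        * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
    System.out.println("roll size = " + rollSize + " bytes");
  }
}
```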
2023-05-22 16:59:45,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-22 16:59:45,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-22 16:59:45,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-22 16:59:45,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 16:59:45,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 16:59:45,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 16:59:45,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 16:59:45,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
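The StochasticLoadBalancer entry above lists maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000. A sketch restating those logged settings as the configuration keys they correspond to (the values here are just the defaults the log reports):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: configuration keys corresponding to the balancer settings logged above;
// the values simply restate the defaults the log reports.
public class BalancerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
    System.out.println("steps per region = "
        + conf.getInt("hbase.master.balancer.stochastic.stepsPerRegion", -1));
  }
}
```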
2023-05-22 16:59:45,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 16:59:45,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 16:59:45,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 16:59:45,989 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:45,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 16:59:45,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 16:59:45,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 16:59:45,994 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:59:45,994 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 16:59:45,994 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:45,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38619,1684774785821, sessionid=0x10053d5960c0000, setting cluster-up flag (Was=false) 2023-05-22 16:59:46,004 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:46,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 16:59:46,009 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:46,011 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 
16:59:46,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 16:59:46,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:46,017 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.hbase-snapshot/.tmp 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:59:46,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774816021 2023-05-22 16:59:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 16:59:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 16:59:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 16:59:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 16:59:46,022 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 16:59:46,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 16:59:46,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 16:59:46,022 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:59:46,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 16:59:46,022 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 16:59:46,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774786023,5,FailOnTimeoutGroup] 2023-05-22 16:59:46,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774786023,5,FailOnTimeoutGroup] 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-22 16:59:46,024 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:59:46,034 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:59:46,035 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 16:59:46,035 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f 2023-05-22 16:59:46,042 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:46,043 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:59:46,044 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info 2023-05-22 16:59:46,044 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:59:46,045 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,045 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:59:46,046 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:59:46,047 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:59:46,047 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,047 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:59:46,049 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/table 2023-05-22 16:59:46,049 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:59:46,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,050 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740 2023-05-22 16:59:46,051 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740 2023-05-22 16:59:46,052 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:59:46,054 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:59:46,056 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:59:46,057 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=740733, jitterRate=-0.05811038613319397}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:59:46,057 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:59:46,057 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 16:59:46,057 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 16:59:46,057 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 16:59:46,057 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 16:59:46,057 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 16:59:46,059 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 16:59:46,059 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 16:59:46,060 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 16:59:46,060 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 16:59:46,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 16:59:46,062 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 16:59:46,063 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-22 16:59:46,083 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(951): ClusterId : fb2603d9-e7f4-403d-a261-db21017953bb 2023-05-22 16:59:46,084 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 16:59:46,086 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 16:59:46,086 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 16:59:46,089 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 16:59:46,090 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ReadOnlyZKClient(139): Connect 0x0dd9780c to 127.0.0.1:61632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:59:46,093 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31e87c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:59:46,093 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@721137c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 16:59:46,102 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:32813 2023-05-22 16:59:46,102 INFO [RS:0;jenkins-hbase4:32813] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 16:59:46,102 INFO [RS:0;jenkins-hbase4:32813] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 16:59:46,102 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-22 16:59:46,103 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,38619,1684774785821 with isa=jenkins-hbase4.apache.org/172.31.14.131:32813, startcode=1684774785866 2023-05-22 16:59:46,103 DEBUG [RS:0;jenkins-hbase4:32813] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 16:59:46,106 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37207, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 16:59:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,107 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f 2023-05-22 16:59:46,107 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36769 2023-05-22 16:59:46,107 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 16:59:46,109 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 16:59:46,109 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,109 WARN [RS:0;jenkins-hbase4:32813] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 16:59:46,109 INFO [RS:0;jenkins-hbase4:32813] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:59:46,109 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1946): logDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,110 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32813,1684774785866] 2023-05-22 16:59:46,115 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ZKUtil(162): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,116 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 16:59:46,116 INFO [RS:0;jenkins-hbase4:32813] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 16:59:46,117 INFO [RS:0;jenkins-hbase4:32813] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 16:59:46,117 INFO [RS:0;jenkins-hbase4:32813] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 16:59:46,118 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,118 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 16:59:46,119 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,119 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 16:59:46,120 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,120 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,120 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,120 DEBUG [RS:0;jenkins-hbase4:32813] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 16:59:46,120 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,120 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,121 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,132 INFO [RS:0;jenkins-hbase4:32813] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 16:59:46,132 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32813,1684774785866-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 16:59:46,143 INFO [RS:0;jenkins-hbase4:32813] regionserver.Replication(203): jenkins-hbase4.apache.org,32813,1684774785866 started 2023-05-22 16:59:46,143 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32813,1684774785866, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32813, sessionid=0x10053d5960c0001 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32813,1684774785866' 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 16:59:46,143 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32813,1684774785866' 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 16:59:46,144 DEBUG [RS:0;jenkins-hbase4:32813] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 16:59:46,144 INFO [RS:0;jenkins-hbase4:32813] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 16:59:46,144 INFO [RS:0;jenkins-hbase4:32813] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-22 16:59:46,213 DEBUG [jenkins-hbase4:38619] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 16:59:46,214 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32813,1684774785866, state=OPENING 2023-05-22 16:59:46,216 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 16:59:46,218 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:46,219 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 16:59:46,219 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:59:46,246 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32813%2C1684774785866, suffix=, logDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866, archiveDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs, maxLogs=32 2023-05-22 16:59:46,264 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774786247 2023-05-22 16:59:46,264 DEBUG [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34449,DS-a4abff20-5e75-4147-9ac0-3584bbb270d7,DISK], DatanodeInfoWithStorage[127.0.0.1:37637,DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52,DISK]] 2023-05-22 16:59:46,373 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,373 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 16:59:46,376 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50144, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 16:59:46,379 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 16:59:46,379 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 16:59:46,381 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32813%2C1684774785866.meta, suffix=.meta, logDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866, archiveDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs, maxLogs=32 2023-05-22 16:59:46,389 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.meta.1684774786382.meta 2023-05-22 16:59:46,389 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37637,DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52,DISK], DatanodeInfoWithStorage[127.0.0.1:34449,DS-a4abff20-5e75-4147-9ac0-3584bbb270d7,DISK]] 2023-05-22 16:59:46,389 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 16:59:46,390 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 16:59:46,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 16:59:46,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 16:59:46,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info 2023-05-22 16:59:46,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info 2023-05-22 16:59:46,392 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 16:59:46,393 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,393 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 16:59:46,394 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:59:46,394 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/rep_barrier 2023-05-22 16:59:46,394 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 16:59:46,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 16:59:46,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/table 2023-05-22 16:59:46,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/table 2023-05-22 16:59:46,396 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 16:59:46,397 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,397 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740 2023-05-22 16:59:46,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740 2023-05-22 16:59:46,400 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 16:59:46,401 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 16:59:46,402 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=732244, jitterRate=-0.06890438497066498}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 16:59:46,402 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 16:59:46,405 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774786373 2023-05-22 16:59:46,409 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 16:59:46,410 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 16:59:46,410 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32813,1684774785866, state=OPEN 2023-05-22 16:59:46,412 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 16:59:46,412 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 16:59:46,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 16:59:46,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32813,1684774785866 in 194 msec 2023-05-22 16:59:46,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 16:59:46,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 355 msec 2023-05-22 16:59:46,420 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 400 msec 2023-05-22 16:59:46,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774786420, completionTime=-1 2023-05-22 16:59:46,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 16:59:46,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 16:59:46,423 DEBUG [hconnection-0x70fc5c9a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:59:46,426 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:59:46,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 16:59:46,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774846427 2023-05-22 16:59:46,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774906427 2023-05-22 16:59:46,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38619,1684774785821-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38619,1684774785821-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38619,1684774785821-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38619, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-22 16:59:46,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 16:59:46,436 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 16:59:46,437 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 16:59:46,438 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:59:46,439 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:59:46,446 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,446 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a empty. 2023-05-22 16:59:46,447 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,447 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 16:59:46,460 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 16:59:46,461 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 43b2845b87cd14d2606abdf4e671ea3a, NAME => 'hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp 2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 43b2845b87cd14d2606abdf4e671ea3a, disabling compactions & flushes 2023-05-22 16:59:46,467 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 
2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. after waiting 0 ms 2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 16:59:46,467 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 16:59:46,467 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 43b2845b87cd14d2606abdf4e671ea3a: 2023-05-22 16:59:46,470 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:59:46,470 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774786470"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774786470"}]},"ts":"1684774786470"} 2023-05-22 16:59:46,472 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:59:46,473 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:59:46,473 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774786473"}]},"ts":"1684774786473"} 2023-05-22 16:59:46,475 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 16:59:46,481 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=43b2845b87cd14d2606abdf4e671ea3a, ASSIGN}] 2023-05-22 16:59:46,483 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=43b2845b87cd14d2606abdf4e671ea3a, ASSIGN 2023-05-22 16:59:46,484 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=43b2845b87cd14d2606abdf4e671ea3a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32813,1684774785866; forceNewPlan=false, retain=false 2023-05-22 16:59:46,635 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=43b2845b87cd14d2606abdf4e671ea3a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,635 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774786635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774786635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774786635"}]},"ts":"1684774786635"} 2023-05-22 16:59:46,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 43b2845b87cd14d2606abdf4e671ea3a, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 16:59:46,792 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 16:59:46,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43b2845b87cd14d2606abdf4e671ea3a, NAME => 'hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:59:46,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:46,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,794 INFO [StoreOpener-43b2845b87cd14d2606abdf4e671ea3a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,796 DEBUG [StoreOpener-43b2845b87cd14d2606abdf4e671ea3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/info 2023-05-22 16:59:46,796 DEBUG [StoreOpener-43b2845b87cd14d2606abdf4e671ea3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/info 2023-05-22 16:59:46,796 INFO [StoreOpener-43b2845b87cd14d2606abdf4e671ea3a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43b2845b87cd14d2606abdf4e671ea3a columnFamilyName info 2023-05-22 16:59:46,796 INFO [StoreOpener-43b2845b87cd14d2606abdf4e671ea3a-1] regionserver.HStore(310): Store=43b2845b87cd14d2606abdf4e671ea3a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:46,797 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 16:59:46,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:59:46,803 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 43b2845b87cd14d2606abdf4e671ea3a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=701699, jitterRate=-0.1077439934015274}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:59:46,803 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 43b2845b87cd14d2606abdf4e671ea3a: 2023-05-22 16:59:46,804 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a., pid=6, masterSystemTime=1684774786789 2023-05-22 16:59:46,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 16:59:46,807 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 
2023-05-22 16:59:46,807 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=43b2845b87cd14d2606abdf4e671ea3a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:46,807 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774786807"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774786807"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774786807"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774786807"}]},"ts":"1684774786807"} 2023-05-22 16:59:46,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 16:59:46,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 43b2845b87cd14d2606abdf4e671ea3a, server=jenkins-hbase4.apache.org,32813,1684774785866 in 172 msec 2023-05-22 16:59:46,813 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 16:59:46,813 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=43b2845b87cd14d2606abdf4e671ea3a, ASSIGN in 330 msec 2023-05-22 16:59:46,814 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:59:46,814 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774786814"}]},"ts":"1684774786814"} 2023-05-22 16:59:46,815 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 16:59:46,817 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:59:46,819 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-05-22 16:59:46,838 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 16:59:46,839 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:59:46,839 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:46,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 16:59:46,851 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): 
master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:59:46,855 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-22 16:59:46,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 16:59:46,871 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 16:59:46,877 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-22 16:59:46,892 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 16:59:46,894 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.001sec 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38619,1684774785821-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 16:59:46,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38619,1684774785821-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-22 16:59:46,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 16:59:46,986 DEBUG [Listener at localhost/39181] zookeeper.ReadOnlyZKClient(139): Connect 0x18e9f399 to 127.0.0.1:61632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 16:59:46,994 DEBUG [Listener at localhost/39181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3812a847, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 16:59:46,995 DEBUG [hconnection-0x2b11c20-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 16:59:46,999 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 16:59:47,000 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 16:59:47,000 INFO [Listener at localhost/39181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 16:59:47,003 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 16:59:47,003 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 16:59:47,004 INFO [Listener at localhost/39181] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 16:59:47,005 DEBUG [Listener at localhost/39181] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-22 16:59:47,007 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-22 16:59:47,009 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-22 16:59:47,009 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
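Annotation: the 8192-byte flush size flagged in this warning also explains the "sizeToCheck=16.0 K" reported by the split-policy DEBUG lines further down. With a single region in the table, the SteppingSplitPolicy shown in the region-open entries checks against its initialSize of 16384, which matches twice the memstore flush size (the usual default when no explicit initial size is configured). An illustrative sketch of that comparison, not HBase's own code:

// Sketch only: the comparison behind the later
// "Should split because info size=..., sizeToCheck=16.0 K" DEBUG lines.
public class SplitSizeCheckSketch {
    public static void main(String[] args) {
        long memstoreFlushSize = 8192L;              // MEMSTORE_FLUSHSIZE from the WARN above
        long initialSize = 2 * memstoreFlushSize;    // 16384, matching "initialSize=16384" in the Opened entries
        long desiredMaxFileSize = 826395L;           // jittered max file size for the test-table region
        int regionsWithCommonTable = 1;              // as logged by IncreasingToUpperBoundRegionSplitPolicy

        // With one region the stepping policy checks the small initial size;
        // with more regions it falls back to the desired max file size.
        long sizeToCheck = regionsWithCommonTable == 1 ? initialSize : desiredMaxFileSize;
        long infoStoreSize = 39 * 1024L;             // "info size=39.0 K" from the first such DEBUG line

        System.out.println("sizeToCheck=" + sizeToCheck);                 // 16384
        System.out.println("should split: " + (infoStoreSize > sizeToCheck)); // true
    }
}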
2023-05-22 16:59:47,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 16:59:47,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-22 16:59:47,013 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 16:59:47,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-22 16:59:47,014 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 16:59:47,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:59:47,016 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,016 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8 empty. 
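Annotation: the create request logged just above corresponds to an ordinary client-side Admin call. The sketch below builds an equivalent table; it is illustrative rather than the test's own code, and it assumes the small MAX_FILESIZE and MEMSTORE_FLUSHSIZE values flagged by the warnings are set on the table descriptor (the checker accepts either the descriptor or the configuration as the source):

// Client-side sketch of a create-table request equivalent to the one logged above.
// Illustrative only; the tiny maxFileSize/memStoreFlushSize values are an assumption
// based on the TableDescriptorChecker warnings.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(1)                  // VERSIONS => '1'
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .build();
            TableDescriptor table = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
                .setColumnFamily(info)
                .setMaxFileSize(786432L)            // would trigger the MAX_FILESIZE warning
                .setMemStoreFlushSize(8192L)        // would trigger the MEMSTORE_FLUSHSIZE warning
                .build();
            admin.createTable(table);
        }
    }
}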
2023-05-22 16:59:47,017 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,017 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-22 16:59:47,026 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-22 16:59:47,027 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c3b3adc06b17bd220cd47cee6fa68fc8, NAME => 'TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/.tmp 2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing c3b3adc06b17bd220cd47cee6fa68fc8, disabling compactions & flushes 2023-05-22 16:59:47,033 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. after waiting 0 ms 2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:47,033 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 
2023-05-22 16:59:47,033 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:47,036 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 16:59:47,036 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774787036"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774787036"}]},"ts":"1684774787036"} 2023-05-22 16:59:47,038 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 16:59:47,039 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 16:59:47,039 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774787039"}]},"ts":"1684774787039"} 2023-05-22 16:59:47,040 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-22 16:59:47,044 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, ASSIGN}] 2023-05-22 16:59:47,045 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, ASSIGN 2023-05-22 16:59:47,046 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32813,1684774785866; forceNewPlan=false, retain=false 2023-05-22 16:59:47,197 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c3b3adc06b17bd220cd47cee6fa68fc8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:47,197 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774787197"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774787197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774787197"}]},"ts":"1684774787197"} 2023-05-22 16:59:47,199 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 16:59:47,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:47,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c3b3adc06b17bd220cd47cee6fa68fc8, NAME => 'TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.', STARTKEY => '', ENDKEY => ''} 2023-05-22 16:59:47,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 16:59:47,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,356 INFO [StoreOpener-c3b3adc06b17bd220cd47cee6fa68fc8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,358 DEBUG [StoreOpener-c3b3adc06b17bd220cd47cee6fa68fc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info 2023-05-22 16:59:47,358 DEBUG [StoreOpener-c3b3adc06b17bd220cd47cee6fa68fc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info 2023-05-22 16:59:47,358 INFO [StoreOpener-c3b3adc06b17bd220cd47cee6fa68fc8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c3b3adc06b17bd220cd47cee6fa68fc8 columnFamilyName info 2023-05-22 16:59:47,359 INFO [StoreOpener-c3b3adc06b17bd220cd47cee6fa68fc8-1] regionserver.HStore(310): Store=c3b3adc06b17bd220cd47cee6fa68fc8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 16:59:47,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:47,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 16:59:47,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c3b3adc06b17bd220cd47cee6fa68fc8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=826395, jitterRate=0.050816670060157776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 16:59:47,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:47,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8., pid=11, masterSystemTime=1684774787351 2023-05-22 16:59:47,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:47,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 
2023-05-22 16:59:47,368 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c3b3adc06b17bd220cd47cee6fa68fc8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:47,368 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774787368"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774787368"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774787368"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774787368"}]},"ts":"1684774787368"} 2023-05-22 16:59:47,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-22 16:59:47,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 in 171 msec 2023-05-22 16:59:47,373 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-22 16:59:47,373 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, ASSIGN in 327 msec 2023-05-22 16:59:47,374 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 16:59:47,374 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774787374"}]},"ts":"1684774787374"} 2023-05-22 16:59:47,376 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-22 16:59:47,377 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 16:59:47,379 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 368 msec 2023-05-22 16:59:50,005 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 16:59:52,116 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-22 16:59:52,117 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 16:59:52,117 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-22 16:59:57,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38619] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-22 16:59:57,016 INFO [Listener at localhost/39181] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-22 16:59:57,018 DEBUG [Listener at localhost/39181] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-22 16:59:57,018 DEBUG [Listener at localhost/39181] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 16:59:57,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:57,031 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:59:57,044 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/312b20fa2c624442a578ddc6567fe2b4 2023-05-22 16:59:57,051 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/312b20fa2c624442a578ddc6567fe2b4 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4 2023-05-22 16:59:57,058 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4, entries=7, sequenceid=11, filesize=12.1 K 2023-05-22 16:59:57,059 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for c3b3adc06b17bd220cd47cee6fa68fc8 in 28ms, sequenceid=11, compaction requested=false 2023-05-22 16:59:57,060 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:57,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:57,061 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-22 16:59:57,071 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=35 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/cf2e88a75287411a857273696f2be3e7 2023-05-22 16:59:57,077 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/cf2e88a75287411a857273696f2be3e7 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7 2023-05-22 16:59:57,081 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7, entries=21, sequenceid=35, filesize=26.9 K 2023-05-22 16:59:57,082 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=4.20 KB/4304 for c3b3adc06b17bd220cd47cee6fa68fc8 in 21ms, sequenceid=35, compaction requested=false 2023-05-22 16:59:57,082 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:57,082 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.0 K, sizeToCheck=16.0 K 2023-05-22 16:59:57,082 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:59:57,082 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7 because midkey is the same as first or last row 2023-05-22 16:59:59,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 16:59:59,069 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 16:59:59,088 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=45 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/4bbd6286f8f94b3ab3957a5fe85890a2 2023-05-22 16:59:59,095 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/4bbd6286f8f94b3ab3957a5fe85890a2 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2 2023-05-22 16:59:59,102 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2, entries=7, sequenceid=45, filesize=12.1 K 2023-05-22 16:59:59,102 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-22 16:59:59,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] ipc.CallRunner(144): callId: 66 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:50176 deadline: 1684774809101, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 16:59:59,103 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for c3b3adc06b17bd220cd47cee6fa68fc8 in 34ms, sequenceid=45, compaction requested=true 2023-05-22 16:59:59,103 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:59,103 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=51.1 K, sizeToCheck=16.0 K 2023-05-22 16:59:59,103 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:59:59,103 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7 because midkey is the same as first or last row 2023-05-22 16:59:59,103 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 16:59:59,104 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 16:59:59,106 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 52295 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 16:59:59,107 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): c3b3adc06b17bd220cd47cee6fa68fc8/info is initiating minor compaction (all files) 2023-05-22 16:59:59,107 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c3b3adc06b17bd220cd47cee6fa68fc8/info in TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 
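Annotation: the RegionTooBusyException above reports "Over memstore limit=32.0 K". That figure follows from the table's 8192-byte flush size combined with the memstore blocking multiplier (hbase.hregion.memstore.block.multiplier, assumed here to be at its default of 4): mutations are rejected whenever the region's memstore data size exceeds flushSize times the multiplier, giving flushes time to catch up. A minimal sketch of that check:

// Sketch only (assumes the default hbase.hregion.memstore.block.multiplier of 4):
// why the RegionTooBusyException above reports a 32 K memstore limit.
public class MemstoreBlockingLimitSketch {
    static boolean overLimit(long memstoreDataSize, long flushSize, int blockMultiplier) {
        return memstoreDataSize > flushSize * blockMultiplier;  // mutation rejected while this holds
    }

    public static void main(String[] args) {
        long flushSize = 8192L;   // "hbase.hregion.memstore.flush.size" (8192) from the earlier WARN
        int multiplier = 4;       // assumed default blocking multiplier
        System.out.println("blocking limit = " + (flushSize * multiplier));      // 32768 = "32.0 K"
        System.out.println("too busy: " + overLimit(33000L, flushSize, multiplier)); // illustrative size just over the limit
    }
}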
2023-05-22 16:59:59,107 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp, totalSize=51.1 K 2023-05-22 16:59:59,107 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 312b20fa2c624442a578ddc6567fe2b4, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684774797022 2023-05-22 16:59:59,108 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting cf2e88a75287411a857273696f2be3e7, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=35, earliestPutTs=1684774797031 2023-05-22 16:59:59,109 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 4bbd6286f8f94b3ab3957a5fe85890a2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1684774797061 2023-05-22 16:59:59,193 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): c3b3adc06b17bd220cd47cee6fa68fc8#info#compaction#28 average throughput is 35.92 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 16:59:59,208 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/6cc06e3b4f0a4d57b842102f54525b34 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 2023-05-22 16:59:59,214 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c3b3adc06b17bd220cd47cee6fa68fc8/info of c3b3adc06b17bd220cd47cee6fa68fc8 into 6cc06e3b4f0a4d57b842102f54525b34(size=41.7 K), total size for store is 41.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
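Annotation: the "Exploring compaction algorithm has selected 3 files of size 52295 ... with 1 in ratio" line above reflects the ratio test from the earlier CompactionConfiguration entry (ratio 1.200000): a candidate set is "in ratio" when no file in it is larger than the combined size of the other files times the ratio. A sketch of that test, using rounded byte sizes for the three files logged here (approximately 12.1 K, 26.9 K and 12.1 K; not the exact file sizes):

// Sketch of the "in ratio" test behind the ExploringCompactionPolicy DEBUG line above.
// File sizes are rounded approximations of the logged 12.1 K / 26.9 K / 12.1 K values.
public class CompactionRatioSketch {
    static boolean inRatio(long[] sizes, double ratio) {
        long total = 0;
        for (long s : sizes) total += s;
        for (long s : sizes) {
            if (s > (total - s) * ratio) return false; // one file dominates the rest: not in ratio
        }
        return true;
    }

    public static void main(String[] args) {
        long[] candidate = {12390L, 27545L, 12390L};
        System.out.println("in ratio: " + inRatio(candidate, 1.2)); // true: the 3-file selection qualifies
    }
}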
2023-05-22 16:59:59,214 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 16:59:59,214 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8., storeName=c3b3adc06b17bd220cd47cee6fa68fc8/info, priority=13, startTime=1684774799103; duration=0sec 2023-05-22 16:59:59,215 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=41.7 K, sizeToCheck=16.0 K 2023-05-22 16:59:59,215 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 16:59:59,215 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 because midkey is the same as first or last row 2023-05-22 16:59:59,215 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:09,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:09,154 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-22 17:00:09,165 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=72 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/2a263d5a53294d8c9d0914c4a559b773 2023-05-22 17:00:09,170 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/2a263d5a53294d8c9d0914c4a559b773 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773 2023-05-22 17:00:09,175 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773, entries=23, sequenceid=72, filesize=29.0 K 2023-05-22 17:00:09,176 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=6.30 KB/6456 for c3b3adc06b17bd220cd47cee6fa68fc8 in 22ms, sequenceid=72, compaction requested=false 2023-05-22 17:00:09,176 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 17:00:09,176 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=70.7 K, sizeToCheck=16.0 K 2023-05-22 17:00:09,176 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 17:00:09,176 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 because midkey is the same as first or last row 2023-05-22 17:00:11,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,163 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 17:00:11,175 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=82 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/50613f8f95bd4affbab0f7a6611f9513 2023-05-22 17:00:11,181 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/50613f8f95bd4affbab0f7a6611f9513 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513 2023-05-22 17:00:11,187 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513, entries=7, sequenceid=82, filesize=12.1 K 2023-05-22 17:00:11,188 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for c3b3adc06b17bd220cd47cee6fa68fc8 in 25ms, sequenceid=82, compaction requested=true 2023-05-22 17:00:11,188 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 17:00:11,188 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.8 K, sizeToCheck=16.0 K 2023-05-22 17:00:11,188 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 17:00:11,188 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 because midkey is the same as first or last row 2023-05-22 17:00:11,188 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,188 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:11,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on c3b3adc06b17bd220cd47cee6fa68fc8 
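Annotation: each flush in this log follows the same two-step pattern: the memstore is written to a new HFile under the region's .tmp directory ("Flushed memstore data size=... to=.../.tmp/info/...") and is then committed by moving it into the column family directory ("Committing .../.tmp/info/... as .../info/..."), so readers never see a partially written file. Conceptually the commit is a filesystem rename; the sketch below illustrates that step at the HDFS API level with placeholder paths and a hypothetical file name, and is not HBase's internal code:

// Conceptual sketch of the flush "commit" step seen in the HRegionFileSystem DEBUG lines.
// Paths below are placeholders modelled on the region directory in this log; "exampleHFile" is hypothetical.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushCommitSketch {
    // The flushed HFile becomes visible to readers only once this rename succeeds.
    static void commitFlushedFile(FileSystem fs, Path tmpFile, Path familyDirFile) throws IOException {
        if (!fs.rename(tmpFile, familyDirFile)) {
            throw new IOException("failed to commit flushed file " + tmpFile);
        }
    }

    public static void main(String[] args) {
        String region = "/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8";
        Path tmpFile = new Path(region + "/.tmp/info/exampleHFile");
        Path committed = new Path(region + "/info/exampleHFile");
        System.out.println("commit = rename " + tmpFile + " -> " + committed);
    }
}

Writing into .tmp first and renaming keeps half-written files out of scans and makes cleanup after a failed flush cheap.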
2023-05-22 17:00:11,189 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=25.22 KB heapSize=27.25 KB 2023-05-22 17:00:11,189 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 84764 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:00:11,189 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): c3b3adc06b17bd220cd47cee6fa68fc8/info is initiating minor compaction (all files) 2023-05-22 17:00:11,190 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c3b3adc06b17bd220cd47cee6fa68fc8/info in TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 17:00:11,190 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp, totalSize=82.8 K 2023-05-22 17:00:11,190 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 6cc06e3b4f0a4d57b842102f54525b34, keycount=35, bloomtype=ROW, size=41.7 K, encoding=NONE, compression=NONE, seqNum=45, earliestPutTs=1684774797022 2023-05-22 17:00:11,191 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 2a263d5a53294d8c9d0914c4a559b773, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=72, earliestPutTs=1684774799070 2023-05-22 17:00:11,191 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 50613f8f95bd4affbab0f7a6611f9513, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1684774809155 2023-05-22 17:00:11,196 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-22 17:00:11,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] ipc.CallRunner(144): callId: 105 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:50176 deadline: 1684774821195, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,203 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=25.22 KB at sequenceid=109 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/8ea728247cd340e09fdf51a3c3ccb974 2023-05-22 17:00:11,207 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): c3b3adc06b17bd220cd47cee6fa68fc8#info#compaction#32 average throughput is 22.23 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:11,208 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/8ea728247cd340e09fdf51a3c3ccb974 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/8ea728247cd340e09fdf51a3c3ccb974 2023-05-22 17:00:11,218 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/8ea728247cd340e09fdf51a3c3ccb974, entries=24, sequenceid=109, filesize=30.0 K 2023-05-22 17:00:11,219 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~25.22 KB/25824, heapSize ~27.23 KB/27888, currentSize=5.25 KB/5380 for c3b3adc06b17bd220cd47cee6fa68fc8 in 30ms, sequenceid=109, compaction requested=false 2023-05-22 17:00:11,219 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 17:00:11,219 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=112.8 K, sizeToCheck=16.0 K 2023-05-22 17:00:11,219 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 17:00:11,219 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 because midkey is the same as first or last row 2023-05-22 17:00:11,222 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/6d6fae9bfbd64708b7dc6f2e4e9c351f as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f 2023-05-22 17:00:11,227 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c3b3adc06b17bd220cd47cee6fa68fc8/info of c3b3adc06b17bd220cd47cee6fa68fc8 into 6d6fae9bfbd64708b7dc6f2e4e9c351f(size=73.5 K), total size for store is 103.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
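Annotation: after this second compaction the store holds 73.5 K (the newly written compacted file) plus 30.0 K (the file flushed while the compaction ran), matching the "total size for store is 103.5 K" above, and this time the region goes on to split: the split request with splitKey=row0062 appears just below. The earlier "cannot split ... because midkey is the same as first or last row" lines describe the other half of that decision: besides exceeding the size threshold, the midkey of the largest store file must differ from that file's first and last rows, otherwise one daughter region would be empty. A sketch of the combined check (the first and last row keys below are examples; row0062 is the logged split key):

// Sketch of the two-part split decision visible in the DEBUG lines:
// (1) store size exceeds sizeToCheck, and (2) the largest file's midkey is a usable split point.
import java.util.Arrays;
import java.util.Optional;

public class SplitDecisionSketch {
    static Optional<byte[]> splitPoint(long storeSize, long sizeToCheck,
                                       byte[] firstRow, byte[] midRow, byte[] lastRow) {
        if (storeSize <= sizeToCheck) return Optional.empty();                 // too small to split
        if (Arrays.equals(midRow, firstRow) || Arrays.equals(midRow, lastRow)) {
            return Optional.empty();  // "midkey is the same as first or last row": one daughter would be empty
        }
        return Optional.of(midRow);
    }

    public static void main(String[] args) {
        long storeSize = 105984L;    // ~"total size for store is 103.5 K"
        long sizeToCheck = 16384L;   // "sizeToCheck=16.0 K"
        byte[] first = "row0001".getBytes();  // example first row
        byte[] mid = "row0062".getBytes();    // splitKey from the split request below
        byte[] last = "row0123".getBytes();   // example last row
        System.out.println(splitPoint(storeSize, sizeToCheck, first, mid, last)
            .map(String::new).orElse("cannot split"));  // prints row0062
    }
}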
2023-05-22 17:00:11,227 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 17:00:11,227 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8., storeName=c3b3adc06b17bd220cd47cee6fa68fc8/info, priority=13, startTime=1684774811188; duration=0sec 2023-05-22 17:00:11,228 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=103.5 K, sizeToCheck=16.0 K 2023-05-22 17:00:11,228 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-22 17:00:11,228 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,228 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,229 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38619] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,32813,1684774785866, parent={ENCODED => c3b3adc06b17bd220cd47cee6fa68fc8, NAME => 'TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-22 17:00:11,236 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38619] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,242 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38619] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c3b3adc06b17bd220cd47cee6fa68fc8, daughterA=23fc1f6c9fd87ac6b51f5db09837184b, daughterB=b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,243 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c3b3adc06b17bd220cd47cee6fa68fc8, daughterA=23fc1f6c9fd87ac6b51f5db09837184b, daughterB=b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,243 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c3b3adc06b17bd220cd47cee6fa68fc8, daughterA=23fc1f6c9fd87ac6b51f5db09837184b, daughterB=b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,243 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c3b3adc06b17bd220cd47cee6fa68fc8, daughterA=23fc1f6c9fd87ac6b51f5db09837184b, daughterB=b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,251 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, UNASSIGN}] 2023-05-22 17:00:11,252 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, UNASSIGN 2023-05-22 17:00:11,253 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c3b3adc06b17bd220cd47cee6fa68fc8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,253 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774811253"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774811253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774811253"}]},"ts":"1684774811253"} 2023-05-22 17:00:11,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 17:00:11,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c3b3adc06b17bd220cd47cee6fa68fc8, disabling compactions & flushes 2023-05-22 17:00:11,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 17:00:11,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 17:00:11,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. after waiting 0 ms 2023-05-22 17:00:11,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 
2023-05-22 17:00:11,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c3b3adc06b17bd220cd47cee6fa68fc8 1/1 column families, dataSize=5.25 KB heapSize=5.88 KB 2023-05-22 17:00:11,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.25 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/228b6c0ae7e24eaf930de7bc34fd3ab6 2023-05-22 17:00:11,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.tmp/info/228b6c0ae7e24eaf930de7bc34fd3ab6 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/228b6c0ae7e24eaf930de7bc34fd3ab6 2023-05-22 17:00:11,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/228b6c0ae7e24eaf930de7bc34fd3ab6, entries=5, sequenceid=118, filesize=10.0 K 2023-05-22 17:00:11,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.25 KB/5380, heapSize ~5.86 KB/6000, currentSize=0 B/0 for c3b3adc06b17bd220cd47cee6fa68fc8 in 19ms, sequenceid=118, compaction requested=true 2023-05-22 17:00:11,438 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513] to archive 2023-05-22 17:00:11,438 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-22 17:00:11,440 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/312b20fa2c624442a578ddc6567fe2b4 2023-05-22 17:00:11,441 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/cf2e88a75287411a857273696f2be3e7 2023-05-22 17:00:11,443 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6cc06e3b4f0a4d57b842102f54525b34 2023-05-22 17:00:11,444 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/4bbd6286f8f94b3ab3957a5fe85890a2 2023-05-22 17:00:11,445 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/2a263d5a53294d8c9d0914c4a559b773 2023-05-22 17:00:11,446 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/50613f8f95bd4affbab0f7a6611f9513 2023-05-22 
17:00:11,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=1 2023-05-22 17:00:11,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 2023-05-22 17:00:11,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c3b3adc06b17bd220cd47cee6fa68fc8: 2023-05-22 17:00:11,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,455 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c3b3adc06b17bd220cd47cee6fa68fc8, regionState=CLOSED 2023-05-22 17:00:11,455 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774811455"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774811455"}]},"ts":"1684774811455"} 2023-05-22 17:00:11,458 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-22 17:00:11,458 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure c3b3adc06b17bd220cd47cee6fa68fc8, server=jenkins-hbase4.apache.org,32813,1684774785866 in 202 msec 2023-05-22 17:00:11,460 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-22 17:00:11,460 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c3b3adc06b17bd220cd47cee6fa68fc8, UNASSIGN in 207 msec 2023-05-22 17:00:11,472 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=c3b3adc06b17bd220cd47cee6fa68fc8, threads=3 2023-05-22 17:00:11,473 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/228b6c0ae7e24eaf930de7bc34fd3ab6 for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,473 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,473 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/8ea728247cd340e09fdf51a3c3ccb974 for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,482 DEBUG [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/8ea728247cd340e09fdf51a3c3ccb974, top=true 2023-05-22 17:00:11,482 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/228b6c0ae7e24eaf930de7bc34fd3ab6, top=true 2023-05-22 17:00:11,486 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.splits/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6 for child: b660f62c783cc997c23507e194da314b, parent: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,486 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/228b6c0ae7e24eaf930de7bc34fd3ab6 for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,486 INFO [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/.splits/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974 for child: b660f62c783cc997c23507e194da314b, parent: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,486 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/8ea728247cd340e09fdf51a3c3ccb974 for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,500 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f for region: c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:00:11,501 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region c3b3adc06b17bd220cd47cee6fa68fc8 Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
2023-05-22 17:00:11,531 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-22 17:00:11,533 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-22 17:00:11,535 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684774811535"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1684774811535"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1684774811535"}]},"ts":"1684774811535"} 2023-05-22 17:00:11,535 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774811535"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774811535"}]},"ts":"1684774811535"} 2023-05-22 17:00:11,535 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774811535"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774811535"}]},"ts":"1684774811535"} 2023-05-22 17:00:11,574 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32813] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-22 17:00:11,575 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-22 17:00:11,575 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-22 17:00:11,583 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=23fc1f6c9fd87ac6b51f5db09837184b, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=b660f62c783cc997c23507e194da314b, ASSIGN}] 2023-05-22 17:00:11,584 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=b660f62c783cc997c23507e194da314b, ASSIGN 2023-05-22 17:00:11,584 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=23fc1f6c9fd87ac6b51f5db09837184b, ASSIGN 2023-05-22 17:00:11,584 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/info/e19903c9b41540229d674d392f7becb3 2023-05-22 17:00:11,585 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=b660f62c783cc997c23507e194da314b, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32813,1684774785866; forceNewPlan=false, retain=false 2023-05-22 17:00:11,588 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=23fc1f6c9fd87ac6b51f5db09837184b, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32813,1684774785866; forceNewPlan=false, retain=false 2023-05-22 17:00:11,598 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/table/4c871717168445e080bc0b6d985ca68d 2023-05-22 17:00:11,603 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/info/e19903c9b41540229d674d392f7becb3 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info/e19903c9b41540229d674d392f7becb3 2023-05-22 17:00:11,608 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info/e19903c9b41540229d674d392f7becb3, entries=29, sequenceid=17, filesize=8.6 K 2023-05-22 17:00:11,608 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/table/4c871717168445e080bc0b6d985ca68d as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/table/4c871717168445e080bc0b6d985ca68d 2023-05-22 17:00:11,613 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/table/4c871717168445e080bc0b6d985ca68d, entries=4, sequenceid=17, filesize=4.8 K 2023-05-22 17:00:11,614 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 38ms, sequenceid=17, compaction requested=false 2023-05-22 17:00:11,614 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-22 17:00:11,737 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=23fc1f6c9fd87ac6b51f5db09837184b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,737 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b660f62c783cc997c23507e194da314b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,737 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774811736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774811736"}]},"ts":"1684774811736"} 2023-05-22 17:00:11,737 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774811736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774811736"}]},"ts":"1684774811736"} 2023-05-22 17:00:11,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure 23fc1f6c9fd87ac6b51f5db09837184b, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 17:00:11,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure b660f62c783cc997c23507e194da314b, server=jenkins-hbase4.apache.org,32813,1684774785866}] 2023-05-22 17:00:11,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 
2023-05-22 17:00:11,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23fc1f6c9fd87ac6b51f5db09837184b, NAME => 'TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-22 17:00:11,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:00:11,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,895 INFO [StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,896 DEBUG [StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info 2023-05-22 17:00:11,896 DEBUG [StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info 2023-05-22 17:00:11,897 INFO [StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23fc1f6c9fd87ac6b51f5db09837184b columnFamilyName info 2023-05-22 17:00:11,909 DEBUG [StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] regionserver.HStore(539): loaded hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-bottom 2023-05-22 17:00:11,909 INFO 
[StoreOpener-23fc1f6c9fd87ac6b51f5db09837184b-1] regionserver.HStore(310): Store=23fc1f6c9fd87ac6b51f5db09837184b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:00:11,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:00:11,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23fc1f6c9fd87ac6b51f5db09837184b; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=823836, jitterRate=0.04756191372871399}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 17:00:11,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23fc1f6c9fd87ac6b51f5db09837184b: 2023-05-22 17:00:11,916 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b., pid=17, masterSystemTime=1684774811890 2023-05-22 17:00:11,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,917 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-22 17:00:11,917 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:00:11,917 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): 23fc1f6c9fd87ac6b51f5db09837184b/info is initiating minor compaction (all files) 2023-05-22 17:00:11,917 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 23fc1f6c9fd87ac6b51f5db09837184b/info in TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 
2023-05-22 17:00:11,918 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-bottom] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/.tmp, totalSize=73.5 K 2023-05-22 17:00:11,918 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1684774797022 2023-05-22 17:00:11,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:00:11,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:00:11,919 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:00:11,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b660f62c783cc997c23507e194da314b, NAME => 'TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-22 17:00:11,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,919 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=23fc1f6c9fd87ac6b51f5db09837184b, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:00:11,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,919 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811919"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774811919"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774811919"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774811919"}]},"ts":"1684774811919"} 2023-05-22 17:00:11,921 INFO [StoreOpener-b660f62c783cc997c23507e194da314b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,922 DEBUG [StoreOpener-b660f62c783cc997c23507e194da314b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info 2023-05-22 17:00:11,922 DEBUG [StoreOpener-b660f62c783cc997c23507e194da314b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info 2023-05-22 17:00:11,922 INFO [StoreOpener-b660f62c783cc997c23507e194da314b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b660f62c783cc997c23507e194da314b columnFamilyName info 2023-05-22 17:00:11,924 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-22 17:00:11,924 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure 23fc1f6c9fd87ac6b51f5db09837184b, server=jenkins-hbase4.apache.org,32813,1684774785866 in 183 msec 2023-05-22 17:00:11,926 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=23fc1f6c9fd87ac6b51f5db09837184b, ASSIGN in 341 msec 2023-05-22 17:00:11,927 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): 23fc1f6c9fd87ac6b51f5db09837184b#info#compaction#36 average throughput is 15.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:11,936 DEBUG [StoreOpener-b660f62c783cc997c23507e194da314b-1] regionserver.HStore(539): loaded hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-top 2023-05-22 17:00:11,941 DEBUG [StoreOpener-b660f62c783cc997c23507e194da314b-1] regionserver.HStore(539): loaded hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6 2023-05-22 17:00:11,946 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/.tmp/info/775d5c2ffbf040938d8e65567595be0c as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/775d5c2ffbf040938d8e65567595be0c 2023-05-22 17:00:11,950 DEBUG [StoreOpener-b660f62c783cc997c23507e194da314b-1] regionserver.HStore(539): loaded hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974 2023-05-22 17:00:11,950 INFO [StoreOpener-b660f62c783cc997c23507e194da314b-1] regionserver.HStore(310): Store=b660f62c783cc997c23507e194da314b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:00:11,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,952 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 23fc1f6c9fd87ac6b51f5db09837184b/info of 23fc1f6c9fd87ac6b51f5db09837184b into 775d5c2ffbf040938d8e65567595be0c(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:00:11,952 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 23fc1f6c9fd87ac6b51f5db09837184b: 2023-05-22 17:00:11,952 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b., storeName=23fc1f6c9fd87ac6b51f5db09837184b/info, priority=15, startTime=1684774811916; duration=0sec 2023-05-22 17:00:11,952 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b660f62c783cc997c23507e194da314b 2023-05-22 17:00:11,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b660f62c783cc997c23507e194da314b; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872245, jitterRate=0.10911758244037628}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 17:00:11,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:11,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., pid=18, masterSystemTime=1684774811890 2023-05-22 17:00:11,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:11,960 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:11,961 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:00:11,961 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:00:11,961 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:00:11,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:00:11,961 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-top, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=113.5 K 2023-05-22 17:00:11,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:00:11,962 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1684774797022 2023-05-22 17:00:11,962 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b660f62c783cc997c23507e194da314b, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:11,962 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684774811962"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774811962"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774811962"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774811962"}]},"ts":"1684774811962"} 2023-05-22 17:00:11,962 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974, keycount=24, bloomtype=ROW, size=30.0 K, encoding=NONE, compression=NONE, seqNum=109, earliestPutTs=1684774811164 2023-05-22 17:00:11,963 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6, keycount=5, bloomtype=ROW, size=10.0 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1684774811189 2023-05-22 17:00:11,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-22 17:00:11,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure b660f62c783cc997c23507e194da314b, 
server=jenkins-hbase4.apache.org,32813,1684774785866 in 225 msec 2023-05-22 17:00:11,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-22 17:00:11,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=b660f62c783cc997c23507e194da314b, ASSIGN in 383 msec 2023-05-22 17:00:11,970 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c3b3adc06b17bd220cd47cee6fa68fc8, daughterA=23fc1f6c9fd87ac6b51f5db09837184b, daughterB=b660f62c783cc997c23507e194da314b in 732 msec 2023-05-22 17:00:11,973 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#37 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:11,984 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/d24780597e1a422ca7d2da9268959fee as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d24780597e1a422ca7d2da9268959fee 2023-05-22 17:00:11,989 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into d24780597e1a422ca7d2da9268959fee(size=39.8 K), total size for store is 39.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-22 17:00:11,990 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:11,990 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774811956; duration=0sec 2023-05-22 17:00:11,990 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:17,006 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 17:00:21,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] ipc.CallRunner(144): callId: 107 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:50176 deadline: 1684774831257, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1684774787009.c3b3adc06b17bd220cd47cee6fa68fc8. 
is not online on jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:31,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=3, created chunk count=13, reused chunk count=29, reuseRatio=69.05% 2023-05-22 17:00:31,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-22 17:00:38,718 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-22 17:00:43,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:43,356 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 17:00:43,377 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/6210121999444a33a522d1909255c1e5 2023-05-22 17:00:43,383 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/6210121999444a33a522d1909255c1e5 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5 2023-05-22 17:00:43,388 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5, entries=7, sequenceid=132, filesize=12.1 K 2023-05-22 17:00:43,389 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for b660f62c783cc997c23507e194da314b in 33ms, sequenceid=132, compaction requested=false 2023-05-22 17:00:43,389 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:43,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:43,390 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=25.22 KB heapSize=27.25 KB 2023-05-22 17:00:43,401 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=25.22 KB at sequenceid=159 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/ffec83ebef7d49b8b4195b183fc47b87 2023-05-22 17:00:43,406 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/ffec83ebef7d49b8b4195b183fc47b87 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87 2023-05-22 17:00:43,412 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87, entries=24, sequenceid=159, filesize=30.0 K 2023-05-22 17:00:43,413 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~25.22 KB/25824, heapSize ~27.23 KB/27888, currentSize=3.15 KB/3228 for b660f62c783cc997c23507e194da314b in 22ms, sequenceid=159, compaction requested=true 2023-05-22 17:00:43,413 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:43,413 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:43,413 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:43,414 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83875 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:00:43,414 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:00:43,414 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:00:43,414 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d24780597e1a422ca7d2da9268959fee, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=81.9 K 2023-05-22 17:00:43,415 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting d24780597e1a422ca7d2da9268959fee, keycount=33, bloomtype=ROW, size=39.8 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1684774809158 2023-05-22 17:00:43,415 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 6210121999444a33a522d1909255c1e5, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1684774841348 2023-05-22 17:00:43,416 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting ffec83ebef7d49b8b4195b183fc47b87, keycount=24, bloomtype=ROW, size=30.0 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1684774843357 2023-05-22 17:00:43,428 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#40 average throughput is 32.84 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:43,442 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/b0e622f85ee54d49aa86c69814ad15eb as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b0e622f85ee54d49aa86c69814ad15eb 2023-05-22 17:00:43,448 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into b0e622f85ee54d49aa86c69814ad15eb(size=72.6 K), total size for store is 72.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:00:43,448 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:43,448 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774843413; duration=0sec 2023-05-22 17:00:43,448 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:45,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:45,399 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 17:00:45,407 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=170 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/b970a22471964634b3c58c36f6fee34b 2023-05-22 17:00:45,413 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/b970a22471964634b3c58c36f6fee34b as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b 2023-05-22 17:00:45,418 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b, entries=7, sequenceid=170, filesize=12.1 K 2023-05-22 17:00:45,419 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for b660f62c783cc997c23507e194da314b in 20ms, sequenceid=170, compaction requested=false 2023-05-22 17:00:45,419 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:45,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:45,420 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-22 17:00:45,429 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=193 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/68a03cbd775549c79944ab54c2e0c12a 2023-05-22 17:00:45,435 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/68a03cbd775549c79944ab54c2e0c12a as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a 2023-05-22 17:00:45,441 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a, entries=20, sequenceid=193, filesize=25.8 K 2023-05-22 17:00:45,442 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=8.41 KB/8608 for b660f62c783cc997c23507e194da314b in 22ms, sequenceid=193, compaction requested=true 2023-05-22 17:00:45,442 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:45,443 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:45,443 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:45,444 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 113224 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:00:45,444 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:00:45,444 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:00:45,444 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b0e622f85ee54d49aa86c69814ad15eb, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=110.6 K 2023-05-22 17:00:45,444 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting b0e622f85ee54d49aa86c69814ad15eb, keycount=64, bloomtype=ROW, size=72.6 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1684774809158 2023-05-22 17:00:45,445 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting b970a22471964634b3c58c36f6fee34b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=170, earliestPutTs=1684774843391 2023-05-22 17:00:45,445 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 68a03cbd775549c79944ab54c2e0c12a, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1684774845399 2023-05-22 17:00:45,456 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#43 average throughput is 46.69 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:45,470 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/28a10f6fa25a499395a54ca1c31ee539 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/28a10f6fa25a499395a54ca1c31ee539 2023-05-22 17:00:45,476 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into 28a10f6fa25a499395a54ca1c31ee539(size=101.2 K), total size for store is 101.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:00:45,476 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:45,476 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774845442; duration=0sec 2023-05-22 17:00:45,476 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:47,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:47,431 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-22 17:00:47,442 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=206 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/1493f99603a64ea294237880056ec932 2023-05-22 17:00:47,449 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/1493f99603a64ea294237880056ec932 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932 2023-05-22 17:00:47,454 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932, entries=9, sequenceid=206, filesize=14.2 K 2023-05-22 17:00:47,455 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=19.96 KB/20444 for b660f62c783cc997c23507e194da314b in 24ms, sequenceid=206, compaction requested=false 2023-05-22 17:00:47,455 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:47,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:47,456 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-22 17:00:47,466 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=b660f62c783cc997c23507e194da314b, server=jenkins-hbase4.apache.org,32813,1684774785866 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-22 17:00:47,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:50176 deadline: 1684774857466, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=b660f62c783cc997c23507e194da314b, server=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:00:47,467 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=229 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/ae01e7ce4a6745ba881fa63fa6ff86b7 2023-05-22 17:00:47,472 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/ae01e7ce4a6745ba881fa63fa6ff86b7 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7 2023-05-22 17:00:47,476 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7, entries=20, sequenceid=229, filesize=25.8 K 2023-05-22 17:00:47,477 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for b660f62c783cc997c23507e194da314b in 21ms, sequenceid=229, compaction requested=true 2023-05-22 17:00:47,477 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:47,477 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:47,477 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:47,478 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 144592 starting at candidate #0 after considering 
1 permutations with 1 in ratio 2023-05-22 17:00:47,478 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:00:47,478 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:00:47,478 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/28a10f6fa25a499395a54ca1c31ee539, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=141.2 K 2023-05-22 17:00:47,479 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 28a10f6fa25a499395a54ca1c31ee539, keycount=91, bloomtype=ROW, size=101.2 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1684774809158 2023-05-22 17:00:47,479 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 1493f99603a64ea294237880056ec932, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=206, earliestPutTs=1684774845421 2023-05-22 17:00:47,479 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting ae01e7ce4a6745ba881fa63fa6ff86b7, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1684774847432 2023-05-22 17:00:47,489 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#46 average throughput is 61.57 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:47,497 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/71b6843551024426adf1ea24aac25d6d as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/71b6843551024426adf1ea24aac25d6d 2023-05-22 17:00:47,503 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into 71b6843551024426adf1ea24aac25d6d(size=131.9 K), total size for store is 131.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:00:47,503 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:47,503 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774847477; duration=0sec 2023-05-22 17:00:47,503 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:57,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:57,560 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-22 17:00:57,570 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=243 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/b4d01229678249cc942dd0319d832f93 2023-05-22 17:00:57,576 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/b4d01229678249cc942dd0319d832f93 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93 2023-05-22 17:00:57,581 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93, entries=10, sequenceid=243, filesize=15.3 K 2023-05-22 17:00:57,582 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for b660f62c783cc997c23507e194da314b in 22ms, sequenceid=243, compaction requested=false 2023-05-22 17:00:57,582 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:59,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:59,569 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 17:00:59,578 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=253 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/481a4a2dd04f4fbdb5f10b84d40a5612 2023-05-22 17:00:59,584 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/481a4a2dd04f4fbdb5f10b84d40a5612 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612 2023-05-22 17:00:59,589 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612, entries=7, sequenceid=253, filesize=12.1 K 2023-05-22 17:00:59,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for b660f62c783cc997c23507e194da314b in 21ms, sequenceid=253, compaction requested=true 2023-05-22 17:00:59,590 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:59,590 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:00:59,590 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:00:59,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:00:59,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-22 17:00:59,591 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 163136 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:00:59,591 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:00:59,592 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:00:59,592 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/71b6843551024426adf1ea24aac25d6d, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=159.3 K 2023-05-22 17:00:59,592 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 71b6843551024426adf1ea24aac25d6d, keycount=120, bloomtype=ROW, size=131.9 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1684774809158 2023-05-22 17:00:59,593 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting b4d01229678249cc942dd0319d832f93, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=243, earliestPutTs=1684774847457 2023-05-22 17:00:59,593 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 481a4a2dd04f4fbdb5f10b84d40a5612, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1684774857561 2023-05-22 17:00:59,602 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=276 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/d59354401ab24e63908369b968da0aff 2023-05-22 17:00:59,608 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/d59354401ab24e63908369b968da0aff as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff 2023-05-22 17:00:59,608 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#50 average throughput is 70.29 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:00:59,617 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff, entries=20, sequenceid=276, filesize=25.8 K 2023-05-22 17:00:59,618 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for b660f62c783cc997c23507e194da314b in 28ms, sequenceid=276, compaction requested=false 2023-05-22 17:00:59,618 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:59,621 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/cd2be1813f784ecabdccda41f057d40a as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/cd2be1813f784ecabdccda41f057d40a 2023-05-22 17:00:59,626 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into cd2be1813f784ecabdccda41f057d40a(size=150.0 K), total size for store is 175.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-22 17:00:59,626 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:00:59,626 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774859590; duration=0sec 2023-05-22 17:00:59,626 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:01:01,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:01:01,599 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-22 17:01:01,610 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=287 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/731a32d843424509b4eca2486a57304d 2023-05-22 17:01:01,616 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/731a32d843424509b4eca2486a57304d as 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d 2023-05-22 17:01:01,623 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d, entries=7, sequenceid=287, filesize=12.1 K 2023-05-22 17:01:01,624 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for b660f62c783cc997c23507e194da314b in 25ms, sequenceid=287, compaction requested=true 2023-05-22 17:01:01,624 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:01,624 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:01:01,624 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:01:01,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:01:01,625 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-22 17:01:01,626 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 192463 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:01:01,626 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:01:01,626 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:01:01,626 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/cd2be1813f784ecabdccda41f057d40a, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=188.0 K 2023-05-22 17:01:01,627 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting cd2be1813f784ecabdccda41f057d40a, keycount=137, bloomtype=ROW, size=150.0 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1684774809158 2023-05-22 17:01:01,627 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting d59354401ab24e63908369b968da0aff, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=276, earliestPutTs=1684774859569 2023-05-22 17:01:01,628 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 731a32d843424509b4eca2486a57304d, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=287, earliestPutTs=1684774859591 2023-05-22 17:01:01,638 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=b660f62c783cc997c23507e194da314b, server=jenkins-hbase4.apache.org,32813,1684774785866 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-22 17:01:01,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:50176 deadline: 1684774871638, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=b660f62c783cc997c23507e194da314b, server=jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:01:01,643 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#53 average throughput is 56.10 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:01:01,645 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=311 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/6b8bdbc791594d29baf4cd519d620d56 2023-05-22 17:01:01,651 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/6b8bdbc791594d29baf4cd519d620d56 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56 2023-05-22 17:01:01,655 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56, entries=21, sequenceid=311, filesize=26.9 K 2023-05-22 17:01:01,657 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=8.41 KB/8608 for b660f62c783cc997c23507e194da314b in 32ms, sequenceid=311, compaction requested=false 2023-05-22 17:01:01,657 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:01,662 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/338a2cd3814b42f18b012ba39bf62eab as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/338a2cd3814b42f18b012ba39bf62eab 2023-05-22 17:01:01,667 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into 338a2cd3814b42f18b012ba39bf62eab(size=178.5 K), total size for store is 205.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:01:01,667 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:01,667 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774861624; duration=0sec 2023-05-22 17:01:01,667 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:01:11,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32813] regionserver.HRegion(9158): Flush requested on b660f62c783cc997c23507e194da314b 2023-05-22 17:01:11,725 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-22 17:01:11,739 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=324 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/805b69ec02c9412cb3116056bfab5eeb 2023-05-22 17:01:11,745 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/805b69ec02c9412cb3116056bfab5eeb as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb 2023-05-22 17:01:11,750 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb, entries=9, sequenceid=324, filesize=14.2 K 2023-05-22 17:01:11,751 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=1.05 KB/1076 for b660f62c783cc997c23507e194da314b in 26ms, sequenceid=324, compaction requested=true 2023-05-22 17:01:11,751 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:11,751 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:01:11,751 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-22 17:01:11,752 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 224943 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-22 17:01:11,752 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1912): b660f62c783cc997c23507e194da314b/info is initiating minor compaction (all files) 2023-05-22 17:01:11,752 INFO 
[RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b660f62c783cc997c23507e194da314b/info in TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:01:11,752 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/338a2cd3814b42f18b012ba39bf62eab, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb] into tmpdir=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp, totalSize=219.7 K 2023-05-22 17:01:11,753 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 338a2cd3814b42f18b012ba39bf62eab, keycount=164, bloomtype=ROW, size=178.5 K, encoding=NONE, compression=NONE, seqNum=287, earliestPutTs=1684774809158 2023-05-22 17:01:11,753 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 6b8bdbc791594d29baf4cd519d620d56, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=311, earliestPutTs=1684774861599 2023-05-22 17:01:11,753 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] compactions.Compactor(207): Compacting 805b69ec02c9412cb3116056bfab5eeb, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=324, earliestPutTs=1684774861626 2023-05-22 17:01:11,765 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] throttle.PressureAwareThroughputController(145): b660f62c783cc997c23507e194da314b#info#compaction#55 average throughput is 66.36 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-22 17:01:11,775 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/4ccdedef1d384f07a23e749b36132910 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/4ccdedef1d384f07a23e749b36132910 2023-05-22 17:01:11,780 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b660f62c783cc997c23507e194da314b/info of b660f62c783cc997c23507e194da314b into 4ccdedef1d384f07a23e749b36132910(size=210.3 K), total size for store is 210.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-22 17:01:11,780 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:11,780 INFO [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., storeName=b660f62c783cc997c23507e194da314b/info, priority=13, startTime=1684774871751; duration=0sec 2023-05-22 17:01:11,780 DEBUG [RS:0;jenkins-hbase4:32813-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-22 17:01:13,727 INFO [Listener at localhost/39181] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-22 17:01:13,742 INFO [Listener at localhost/39181] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774786247 with entries=311, filesize=307.65 KB; new WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873727 2023-05-22 17:01:13,742 DEBUG [Listener at localhost/39181] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37637,DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52,DISK], DatanodeInfoWithStorage[127.0.0.1:34449,DS-a4abff20-5e75-4147-9ac0-3584bbb270d7,DISK]] 2023-05-22 17:01:13,742 DEBUG [Listener at localhost/39181] wal.AbstractFSWAL(716): hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774786247 is not closed yet, will try archiving it next time 2023-05-22 17:01:13,748 INFO [Listener at localhost/39181] regionserver.HRegion(2745): Flushing b660f62c783cc997c23507e194da314b 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-22 17:01:13,756 INFO [Listener at localhost/39181] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=329 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/1e427dc41c9245618d8219b274644583 2023-05-22 17:01:13,760 DEBUG [Listener at localhost/39181] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/.tmp/info/1e427dc41c9245618d8219b274644583 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1e427dc41c9245618d8219b274644583 2023-05-22 17:01:13,765 INFO [Listener at localhost/39181] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1e427dc41c9245618d8219b274644583, entries=1, sequenceid=329, filesize=5.8 K 2023-05-22 17:01:13,766 INFO [Listener at localhost/39181] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
b660f62c783cc997c23507e194da314b in 18ms, sequenceid=329, compaction requested=false 2023-05-22 17:01:13,766 DEBUG [Listener at localhost/39181] regionserver.HRegion(2446): Flush status journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:13,766 INFO [Listener at localhost/39181] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-22 17:01:13,776 INFO [Listener at localhost/39181] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/info/fb8f311bafdc4483a0e9185cd28d7f20 2023-05-22 17:01:13,780 DEBUG [Listener at localhost/39181] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/.tmp/info/fb8f311bafdc4483a0e9185cd28d7f20 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info/fb8f311bafdc4483a0e9185cd28d7f20 2023-05-22 17:01:13,784 INFO [Listener at localhost/39181] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/info/fb8f311bafdc4483a0e9185cd28d7f20, entries=16, sequenceid=24, filesize=7.0 K 2023-05-22 17:01:13,785 INFO [Listener at localhost/39181] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 19ms, sequenceid=24, compaction requested=false 2023-05-22 17:01:13,785 DEBUG [Listener at localhost/39181] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-22 17:01:13,785 INFO [Listener at localhost/39181] regionserver.HRegion(2745): Flushing 43b2845b87cd14d2606abdf4e671ea3a 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 17:01:13,795 INFO [Listener at localhost/39181] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/.tmp/info/6525a8004d764a9ea79ff36e70304309 2023-05-22 17:01:13,801 DEBUG [Listener at localhost/39181] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/.tmp/info/6525a8004d764a9ea79ff36e70304309 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/info/6525a8004d764a9ea79ff36e70304309 2023-05-22 17:01:13,805 INFO [Listener at localhost/39181] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/info/6525a8004d764a9ea79ff36e70304309, entries=2, sequenceid=6, filesize=4.8 K 2023-05-22 17:01:13,806 INFO [Listener at localhost/39181] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 43b2845b87cd14d2606abdf4e671ea3a in 21ms, sequenceid=6, compaction requested=false 2023-05-22 17:01:13,807 DEBUG [Listener at localhost/39181] regionserver.HRegion(2446): Flush status journal for 43b2845b87cd14d2606abdf4e671ea3a: 2023-05-22 17:01:13,808 DEBUG [Listener at localhost/39181] 
regionserver.HRegion(2446): Flush status journal for 23fc1f6c9fd87ac6b51f5db09837184b: 2023-05-22 17:01:13,818 INFO [Listener at localhost/39181] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873727 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873808 2023-05-22 17:01:13,818 DEBUG [Listener at localhost/39181] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34449,DS-a4abff20-5e75-4147-9ac0-3584bbb270d7,DISK], DatanodeInfoWithStorage[127.0.0.1:37637,DS-497a699e-7a0b-4aaa-b587-71fd07ebcc52,DISK]] 2023-05-22 17:01:13,818 DEBUG [Listener at localhost/39181] wal.AbstractFSWAL(716): hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873727 is not closed yet, will try archiving it next time 2023-05-22 17:01:13,819 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774786247 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774786247 2023-05-22 17:01:13,820 INFO [Listener at localhost/39181] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-22 17:01:13,822 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873727 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs/jenkins-hbase4.apache.org%2C32813%2C1684774785866.1684774873727 2023-05-22 17:01:13,920 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-22 17:01:13,920 INFO [Listener at localhost/39181] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-22 17:01:13,920 DEBUG [Listener at localhost/39181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18e9f399 to 127.0.0.1:61632 2023-05-22 17:01:13,920 DEBUG [Listener at localhost/39181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:13,920 DEBUG [Listener at localhost/39181] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-22 17:01:13,921 DEBUG [Listener at localhost/39181] util.JVMClusterUtil(257): Found active master hash=673372121, stopped=false 2023-05-22 17:01:13,921 INFO [Listener at localhost/39181] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 17:01:13,922 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:13,922 INFO [Listener at localhost/39181] procedure2.ProcedureExecutor(629): Stopping 2023-05-22 17:01:13,922 DEBUG [Listener at 
localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:13,923 DEBUG [Listener at localhost/39181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7eae862b to 127.0.0.1:61632 2023-05-22 17:01:13,923 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:13,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:13,923 DEBUG [Listener at localhost/39181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:13,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:13,924 INFO [Listener at localhost/39181] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,32813,1684774785866' ***** 2023-05-22 17:01:13,924 INFO [Listener at localhost/39181] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 17:01:13,924 INFO [RS:0;jenkins-hbase4:32813] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 17:01:13,924 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 17:01:13,924 INFO [RS:0;jenkins-hbase4:32813] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 17:01:13,924 INFO [RS:0;jenkins-hbase4:32813] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 17:01:13,924 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(3303): Received CLOSE for b660f62c783cc997c23507e194da314b 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(3303): Received CLOSE for 43b2845b87cd14d2606abdf4e671ea3a 2023-05-22 17:01:13,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b660f62c783cc997c23507e194da314b, disabling compactions & flushes 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(3303): Received CLOSE for 23fc1f6c9fd87ac6b51f5db09837184b 2023-05-22 17:01:13,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:01:13,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 
2023-05-22 17:01:13,925 DEBUG [RS:0;jenkins-hbase4:32813] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0dd9780c to 127.0.0.1:61632 2023-05-22 17:01:13,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. after waiting 0 ms 2023-05-22 17:01:13,925 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:13,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 17:01:13,925 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 17:01:13,927 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-22 17:01:13,928 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1478): Online Regions={b660f62c783cc997c23507e194da314b=TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b., 1588230740=hbase:meta,,1.1588230740, 43b2845b87cd14d2606abdf4e671ea3a=hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a., 23fc1f6c9fd87ac6b51f5db09837184b=TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.} 2023-05-22 17:01:13,931 DEBUG [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1504): Waiting on 1588230740, 23fc1f6c9fd87ac6b51f5db09837184b, 43b2845b87cd14d2606abdf4e671ea3a, b660f62c783cc997c23507e194da314b 2023-05-22 17:01:13,932 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 17:01:13,939 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 17:01:13,939 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 17:01:13,939 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 17:01:13,939 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 17:01:13,942 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-top, 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d24780597e1a422ca7d2da9268959fee, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b0e622f85ee54d49aa86c69814ad15eb, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/28a10f6fa25a499395a54ca1c31ee539, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/71b6843551024426adf1ea24aac25d6d, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/cd2be1813f784ecabdccda41f057d40a, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/338a2cd3814b42f18b012ba39bf62eab, 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb] to archive 2023-05-22 17:01:13,943 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-22 17:01:13,945 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:01:13,946 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-22 17:01:13,946 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-8ea728247cd340e09fdf51a3c3ccb974 2023-05-22 17:01:13,947 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-22 17:01:13,947 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 17:01:13,947 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 17:01:13,947 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-22 17:01:13,948 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d24780597e1a422ca7d2da9268959fee to 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d24780597e1a422ca7d2da9268959fee 2023-05-22 17:01:13,949 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/TestLogRolling-testLogRolling=c3b3adc06b17bd220cd47cee6fa68fc8-228b6c0ae7e24eaf930de7bc34fd3ab6 2023-05-22 17:01:13,950 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6210121999444a33a522d1909255c1e5 2023-05-22 17:01:13,952 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b0e622f85ee54d49aa86c69814ad15eb to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b0e622f85ee54d49aa86c69814ad15eb 2023-05-22 17:01:13,953 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ffec83ebef7d49b8b4195b183fc47b87 2023-05-22 17:01:13,954 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b970a22471964634b3c58c36f6fee34b 2023-05-22 17:01:13,955 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/28a10f6fa25a499395a54ca1c31ee539 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/28a10f6fa25a499395a54ca1c31ee539 2023-05-22 17:01:13,956 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/68a03cbd775549c79944ab54c2e0c12a 2023-05-22 17:01:13,957 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/1493f99603a64ea294237880056ec932 2023-05-22 17:01:13,959 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/71b6843551024426adf1ea24aac25d6d to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/71b6843551024426adf1ea24aac25d6d 2023-05-22 17:01:13,960 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/ae01e7ce4a6745ba881fa63fa6ff86b7 2023-05-22 17:01:13,961 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/b4d01229678249cc942dd0319d832f93 2023-05-22 17:01:13,961 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/cd2be1813f784ecabdccda41f057d40a to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/cd2be1813f784ecabdccda41f057d40a 2023-05-22 17:01:13,963 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/481a4a2dd04f4fbdb5f10b84d40a5612 2023-05-22 17:01:13,964 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/d59354401ab24e63908369b968da0aff 2023-05-22 17:01:13,965 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/338a2cd3814b42f18b012ba39bf62eab to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/338a2cd3814b42f18b012ba39bf62eab 2023-05-22 17:01:13,966 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/731a32d843424509b4eca2486a57304d 2023-05-22 17:01:13,967 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/6b8bdbc791594d29baf4cd519d620d56 2023-05-22 17:01:13,968 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b.-1] 
backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/info/805b69ec02c9412cb3116056bfab5eeb 2023-05-22 17:01:13,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/b660f62c783cc997c23507e194da314b/recovered.edits/332.seqid, newMaxSeqId=332, maxSeqId=121 2023-05-22 17:01:13,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b660f62c783cc997c23507e194da314b: 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1684774811236.b660f62c783cc997c23507e194da314b. 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 43b2845b87cd14d2606abdf4e671ea3a, disabling compactions & flushes 2023-05-22 17:01:13,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. after waiting 0 ms 2023-05-22 17:01:13,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 17:01:13,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/hbase/namespace/43b2845b87cd14d2606abdf4e671ea3a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-22 17:01:13,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 43b2845b87cd14d2606abdf4e671ea3a: 2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774786435.43b2845b87cd14d2606abdf4e671ea3a. 
2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23fc1f6c9fd87ac6b51f5db09837184b, disabling compactions & flushes 2023-05-22 17:01:13,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. after waiting 0 ms 2023-05-22 17:01:13,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:01:13,980 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8->hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/c3b3adc06b17bd220cd47cee6fa68fc8/info/6d6fae9bfbd64708b7dc6f2e4e9c351f-bottom] to archive 2023-05-22 17:01:13,981 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-22 17:01:13,982 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8 to hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/archive/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/info/6d6fae9bfbd64708b7dc6f2e4e9c351f.c3b3adc06b17bd220cd47cee6fa68fc8 2023-05-22 17:01:13,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/data/default/TestLogRolling-testLogRolling/23fc1f6c9fd87ac6b51f5db09837184b/recovered.edits/126.seqid, newMaxSeqId=126, maxSeqId=121 2023-05-22 17:01:13,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 2023-05-22 17:01:13,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23fc1f6c9fd87ac6b51f5db09837184b: 2023-05-22 17:01:13,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1684774811236.23fc1f6c9fd87ac6b51f5db09837184b. 
2023-05-22 17:01:14,125 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-22 17:01:14,138 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32813,1684774785866; all regions closed. 2023-05-22 17:01:14,138 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:01:14,144 DEBUG [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs 2023-05-22 17:01:14,144 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32813%2C1684774785866.meta:.meta(num 1684774786382) 2023-05-22 17:01:14,144 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/WALs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:01:14,149 DEBUG [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/oldWALs 2023-05-22 17:01:14,149 INFO [RS:0;jenkins-hbase4:32813] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32813%2C1684774785866:(num 1684774873808) 2023-05-22 17:01:14,149 DEBUG [RS:0;jenkins-hbase4:32813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:14,149 INFO [RS:0;jenkins-hbase4:32813] regionserver.LeaseManager(133): Closed leases 2023-05-22 17:01:14,149 INFO [RS:0;jenkins-hbase4:32813] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-22 17:01:14,149 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
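The two "Closed WAL: FSHLog ..." entries above show the region server closing its meta WAL and its default WAL and moving the rolled files into oldWALs. For reference, TestLogRolling-style tests can also request a roll explicitly through the Admin API; a hedged sketch, assuming a started HBaseTestingUtility with a single region server as in this run (names are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class WalRollSketch {
      // Ask the mini cluster's only region server to roll its WAL; the file it
      // replaces is what later shows up under .../oldWALs, as logged above.
      static void rollWal(HBaseTestingUtility testUtil) throws Exception {
        ServerName rs = testUtil.getMiniHBaseCluster().getRegionServer(0).getServerName();
        try (Admin admin = testUtil.getConnection().getAdmin()) {
          admin.rollWALWriter(rs);
        }
      }
    }
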
2023-05-22 17:01:14,150 INFO [RS:0;jenkins-hbase4:32813] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32813 2023-05-22 17:01:14,152 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 17:01:14,152 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32813,1684774785866 2023-05-22 17:01:14,152 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 17:01:14,153 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32813,1684774785866] 2023-05-22 17:01:14,153 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32813,1684774785866; numProcessing=1 2023-05-22 17:01:14,155 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32813,1684774785866 already deleted, retry=false 2023-05-22 17:01:14,156 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32813,1684774785866 expired; onlineServers=0 2023-05-22 17:01:14,156 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38619,1684774785821' ***** 2023-05-22 17:01:14,156 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 17:01:14,156 DEBUG [M:0;jenkins-hbase4:38619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c1d4605, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 17:01:14,156 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 17:01:14,156 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38619,1684774785821; all regions closed. 2023-05-22 17:01:14,156 DEBUG [M:0;jenkins-hbase4:38619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:14,156 DEBUG [M:0;jenkins-hbase4:38619] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 17:01:14,156 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-22 17:01:14,156 DEBUG [M:0;jenkins-hbase4:38619] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-22 17:01:14,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774786023] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774786023,5,FailOnTimeoutGroup] 2023-05-22 17:01:14,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774786023] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774786023,5,FailOnTimeoutGroup] 2023-05-22 17:01:14,157 INFO [M:0;jenkins-hbase4:38619] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-22 17:01:14,157 INFO [M:0;jenkins-hbase4:38619] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-22 17:01:14,157 INFO [M:0;jenkins-hbase4:38619] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-22 17:01:14,157 DEBUG [M:0;jenkins-hbase4:38619] master.HMaster(1512): Stopping service threads 2023-05-22 17:01:14,157 INFO [M:0;jenkins-hbase4:38619] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-05-22 17:01:14,158 ERROR [M:0;jenkins-hbase4:38619] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-22 17:01:14,158 INFO [M:0;jenkins-hbase4:38619] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-22 17:01:14,158 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-22 17:01:14,158 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-22 17:01:14,158 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:14,158 DEBUG [M:0;jenkins-hbase4:38619] zookeeper.ZKUtil(398): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-22 17:01:14,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 17:01:14,158 WARN [M:0;jenkins-hbase4:38619] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-22 17:01:14,158 INFO [M:0;jenkins-hbase4:38619] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-22 17:01:14,159 INFO [M:0;jenkins-hbase4:38619] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-22 17:01:14,159 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 17:01:14,159 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:14,159 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:14,159 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 17:01:14,159 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-22 17:01:14,159 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.70 KB heapSize=78.42 KB 2023-05-22 17:01:14,168 INFO [M:0;jenkins-hbase4:38619] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.70 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4b693143eac048d5940b11cdffc71d62 2023-05-22 17:01:14,173 INFO [M:0;jenkins-hbase4:38619] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b693143eac048d5940b11cdffc71d62 2023-05-22 17:01:14,175 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4b693143eac048d5940b11cdffc71d62 as hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4b693143eac048d5940b11cdffc71d62 2023-05-22 17:01:14,179 INFO [M:0;jenkins-hbase4:38619] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b693143eac048d5940b11cdffc71d62 2023-05-22 17:01:14,179 INFO [M:0;jenkins-hbase4:38619] regionserver.HStore(1080): Added hdfs://localhost:36769/user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4b693143eac048d5940b11cdffc71d62, entries=18, sequenceid=160, filesize=6.9 K 2023-05-22 17:01:14,180 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegion(2948): Finished flush of dataSize ~64.70 KB/66256, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=160, compaction requested=false 2023-05-22 17:01:14,181 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:14,181 DEBUG [M:0;jenkins-hbase4:38619] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 17:01:14,181 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4a06edb1-9d1b-9ec6-6d47-dc2ad726096f/MasterData/WALs/jenkins-hbase4.apache.org,38619,1684774785821 2023-05-22 17:01:14,185 INFO [M:0;jenkins-hbase4:38619] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-22 17:01:14,185 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-22 17:01:14,185 INFO [M:0;jenkins-hbase4:38619] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38619 2023-05-22 17:01:14,187 DEBUG [M:0;jenkins-hbase4:38619] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38619,1684774785821 already deleted, retry=false 2023-05-22 17:01:14,254 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 17:01:14,254 INFO [RS:0;jenkins-hbase4:32813] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32813,1684774785866; zookeeper connection closed. 
2023-05-22 17:01:14,254 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): regionserver:32813-0x10053d5960c0001, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 17:01:14,254 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@799d7795] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@799d7795 2023-05-22 17:01:14,255 INFO [Listener at localhost/39181] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-22 17:01:14,354 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 17:01:14,354 INFO [M:0;jenkins-hbase4:38619] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38619,1684774785821; zookeeper connection closed. 2023-05-22 17:01:14,354 DEBUG [Listener at localhost/39181-EventThread] zookeeper.ZKWatcher(600): master:38619-0x10053d5960c0000, quorum=127.0.0.1:61632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-22 17:01:14,355 WARN [Listener at localhost/39181] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 17:01:14,361 INFO [Listener at localhost/39181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 17:01:14,466 WARN [BP-1376922240-172.31.14.131-1684774785257 heartbeating to localhost/127.0.0.1:36769] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 17:01:14,466 WARN [BP-1376922240-172.31.14.131-1684774785257 heartbeating to localhost/127.0.0.1:36769] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1376922240-172.31.14.131-1684774785257 (Datanode Uuid 0ee17268-cba2-4c4f-b605-f8248a399180) service to localhost/127.0.0.1:36769 2023-05-22 17:01:14,467 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/dfs/data/data3/current/BP-1376922240-172.31.14.131-1684774785257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 17:01:14,467 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/dfs/data/data4/current/BP-1376922240-172.31.14.131-1684774785257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 17:01:14,469 WARN [Listener at localhost/39181] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-22 17:01:14,473 INFO [Listener at localhost/39181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 17:01:14,579 WARN [BP-1376922240-172.31.14.131-1684774785257 heartbeating to localhost/127.0.0.1:36769] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-22 17:01:14,579 WARN [BP-1376922240-172.31.14.131-1684774785257 heartbeating to localhost/127.0.0.1:36769] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1376922240-172.31.14.131-1684774785257 (Datanode Uuid 78cbd99a-b923-485c-85ac-848357f5bd3b) service to localhost/127.0.0.1:36769 2023-05-22 17:01:14,580 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/dfs/data/data1/current/BP-1376922240-172.31.14.131-1684774785257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 17:01:14,580 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/cluster_f701e79d-cb19-583a-f783-4b93d49d8892/dfs/data/data2/current/BP-1376922240-172.31.14.131-1684774785257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-22 17:01:14,593 INFO [Listener at localhost/39181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-22 17:01:14,708 INFO [Listener at localhost/39181] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-22 17:01:14,737 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-22 17:01:14,748 INFO [Listener at localhost/39181] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 93) - Thread LEAK? -, OpenFileDescriptor=538 (was 497) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=51 (was 59), ProcessCount=168 (was 168), AvailableMemoryMB=4742 (was 4961) 2023-05-22 17:01:14,756 INFO [Listener at localhost/39181] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=538, MaxFileDescriptor=60000, SystemLoadAverage=51, ProcessCount=168, AvailableMemoryMB=4742 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/hadoop.log.dir so I do NOT create it in target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a491f768-4589-6add-f3e9-7982077e3094/hadoop.tmp.dir so I do NOT create it in target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e, deleteOnExit=true 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] 
hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/test.cache.data in system properties and HBase conf 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/hadoop.tmp.dir in system properties and HBase conf 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/hadoop.log.dir in system properties and HBase conf 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-22 17:01:14,757 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-22 17:01:14,758 DEBUG [Listener at localhost/39181] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 17:01:14,758 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-22 17:01:14,758 INFO 
[Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/nfs.dump.dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/java.io.tmpdir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-22 17:01:14,759 INFO [Listener at localhost/39181] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-22 17:01:14,761 WARN [Listener at localhost/39181] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
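From here the next test case (testLogRollOnNothingWritten) brings up a fresh mini cluster with the options printed above: one master, one region server, two datanodes, and one ZooKeeper server. A minimal sketch of that startup, using the same StartMiniClusterOption values and assuming the HBase 2.x test utilities used in this run (illustrative only, not the test's actual code):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      // Mirrors the "StartMiniClusterOption{numMasters=1, numRegionServers=1,
      // numDataNodes=2, ..., numZkServers=1, ...}" line logged above.
      static HBaseTestingUtility startCluster() throws Exception {
        HBaseTestingUtility testUtil = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        testUtil.startMiniCluster(option);
        return testUtil;
      }
    }
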
2023-05-22 17:01:14,786 WARN [Listener at localhost/39181] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 17:01:14,786 WARN [Listener at localhost/39181] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 17:01:14,825 WARN [Listener at localhost/39181] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 17:01:14,827 INFO [Listener at localhost/39181] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 17:01:14,831 INFO [Listener at localhost/39181] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/java.io.tmpdir/Jetty_localhost_35689_hdfs____cxfikf/webapp 2023-05-22 17:01:14,922 INFO [Listener at localhost/39181] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35689 2023-05-22 17:01:14,923 WARN [Listener at localhost/39181] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-22 17:01:14,926 WARN [Listener at localhost/39181] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-22 17:01:14,926 WARN [Listener at localhost/39181] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-22 17:01:14,962 WARN [Listener at localhost/44785] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 17:01:14,979 WARN [Listener at localhost/44785] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 17:01:14,981 WARN [Listener at localhost/44785] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 17:01:14,982 INFO [Listener at localhost/44785] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 17:01:14,987 INFO [Listener at localhost/44785] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/java.io.tmpdir/Jetty_localhost_35743_datanode____4197p7/webapp 2023-05-22 17:01:15,076 INFO [Listener at localhost/44785] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35743 2023-05-22 17:01:15,085 WARN [Listener at localhost/44073] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 17:01:15,102 WARN [Listener at localhost/44073] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-22 17:01:15,104 WARN [Listener at localhost/44073] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-22 17:01:15,106 INFO [Listener at localhost/44073] log.Slf4jLog(67): jetty-6.1.26 2023-05-22 17:01:15,112 INFO [Listener at localhost/44073] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/java.io.tmpdir/Jetty_localhost_34243_datanode____tvni8n/webapp 2023-05-22 17:01:15,195 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x82e15a52bf11da43: Processing first storage report for DS-a35874dd-8459-4c8b-8ee1-1cfec134530a from datanode e2df3fc5-40c3-41b9-8e26-75e1ef20fba0 2023-05-22 17:01:15,195 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x82e15a52bf11da43: from storage DS-a35874dd-8459-4c8b-8ee1-1cfec134530a node DatanodeRegistration(127.0.0.1:44369, datanodeUuid=e2df3fc5-40c3-41b9-8e26-75e1ef20fba0, infoPort=42685, infoSecurePort=0, ipcPort=44073, storageInfo=lv=-57;cid=testClusterID;nsid=1639916813;c=1684774874789), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 17:01:15,195 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x82e15a52bf11da43: Processing first storage report for DS-479caa5c-de8c-416f-8052-ea12b18527a9 from datanode e2df3fc5-40c3-41b9-8e26-75e1ef20fba0 2023-05-22 17:01:15,195 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x82e15a52bf11da43: from storage DS-479caa5c-de8c-416f-8052-ea12b18527a9 node DatanodeRegistration(127.0.0.1:44369, datanodeUuid=e2df3fc5-40c3-41b9-8e26-75e1ef20fba0, infoPort=42685, infoSecurePort=0, ipcPort=44073, storageInfo=lv=-57;cid=testClusterID;nsid=1639916813;c=1684774874789), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 17:01:15,220 INFO [Listener at localhost/44073] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34243 2023-05-22 17:01:15,225 WARN [Listener at localhost/38851] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-22 17:01:15,314 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x31f4f49cadf2eae2: Processing first storage report for DS-f456166b-b549-401d-8457-03c62b7ccded from datanode 0b9dd38d-3979-4d49-96fe-9313c0ec6368 2023-05-22 17:01:15,314 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x31f4f49cadf2eae2: from storage DS-f456166b-b549-401d-8457-03c62b7ccded node DatanodeRegistration(127.0.0.1:44959, datanodeUuid=0b9dd38d-3979-4d49-96fe-9313c0ec6368, infoPort=36533, infoSecurePort=0, ipcPort=38851, storageInfo=lv=-57;cid=testClusterID;nsid=1639916813;c=1684774874789), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 17:01:15,314 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x31f4f49cadf2eae2: Processing first storage report for DS-33efef61-4611-492b-ac8d-b6f101a40a18 from datanode 0b9dd38d-3979-4d49-96fe-9313c0ec6368 2023-05-22 17:01:15,314 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x31f4f49cadf2eae2: from storage DS-33efef61-4611-492b-ac8d-b6f101a40a18 node DatanodeRegistration(127.0.0.1:44959, datanodeUuid=0b9dd38d-3979-4d49-96fe-9313c0ec6368, infoPort=36533, infoSecurePort=0, ipcPort=38851, storageInfo=lv=-57;cid=testClusterID;nsid=1639916813;c=1684774874789), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-22 17:01:15,335 DEBUG [Listener at localhost/38851] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876 2023-05-22 17:01:15,337 INFO [Listener at localhost/38851] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/zookeeper_0, clientPort=52365, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-22 17:01:15,338 INFO [Listener at localhost/38851] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52365 2023-05-22 17:01:15,338 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,339 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,351 INFO [Listener at localhost/38851] util.FSUtils(471): Created version file at hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577 with version=8 2023-05-22 17:01:15,351 INFO [Listener at localhost/38851] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37047/user/jenkins/test-data/a5ea5a06-3894-1be6-0d42-1701b18dfc53/hbase-staging 2023-05-22 17:01:15,352 INFO [Listener at localhost/38851] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 17:01:15,353 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,353 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,353 INFO [Listener at localhost/38851] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 17:01:15,353 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,353 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 
17:01:15,353 INFO [Listener at localhost/38851] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 17:01:15,354 INFO [Listener at localhost/38851] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41393 2023-05-22 17:01:15,354 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,355 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,356 INFO [Listener at localhost/38851] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41393 connecting to ZooKeeper ensemble=127.0.0.1:52365 2023-05-22 17:01:15,362 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:413930x0, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 17:01:15,362 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41393-0x10053d6f3cc0000 connected 2023-05-22 17:01:15,376 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 17:01:15,376 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:15,376 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 17:01:15,377 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41393 2023-05-22 17:01:15,377 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41393 2023-05-22 17:01:15,377 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41393 2023-05-22 17:01:15,377 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41393 2023-05-22 17:01:15,377 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41393 2023-05-22 17:01:15,378 INFO [Listener at localhost/38851] master.HMaster(444): hbase.rootdir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577, hbase.cluster.distributed=false 2023-05-22 17:01:15,390 INFO [Listener at localhost/38851] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-22 17:01:15,390 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,391 INFO [Listener at 
localhost/38851] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,391 INFO [Listener at localhost/38851] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-22 17:01:15,391 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-22 17:01:15,391 INFO [Listener at localhost/38851] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-22 17:01:15,391 INFO [Listener at localhost/38851] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-22 17:01:15,392 INFO [Listener at localhost/38851] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38403 2023-05-22 17:01:15,393 INFO [Listener at localhost/38851] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-22 17:01:15,396 DEBUG [Listener at localhost/38851] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-22 17:01:15,396 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,398 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,399 INFO [Listener at localhost/38851] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38403 connecting to ZooKeeper ensemble=127.0.0.1:52365 2023-05-22 17:01:15,401 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:384030x0, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-22 17:01:15,402 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): regionserver:384030x0, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-22 17:01:15,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38403-0x10053d6f3cc0001 connected 2023-05-22 17:01:15,403 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:15,404 DEBUG [Listener at localhost/38851] zookeeper.ZKUtil(164): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-22 17:01:15,404 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38403 2023-05-22 17:01:15,404 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38403 2023-05-22 17:01:15,405 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38403 2023-05-22 17:01:15,405 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38403 2023-05-22 17:01:15,406 DEBUG [Listener at localhost/38851] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38403 2023-05-22 17:01:15,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,414 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 17:01:15,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,416 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 17:01:15,416 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-22 17:01:15,416 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 17:01:15,417 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41393,1684774875352 from backup master directory 2023-05-22 17:01:15,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-22 17:01:15,418 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,418 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 17:01:15,418 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-22 17:01:15,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/hbase.id with ID: 29f10909-64de-47a5-be74-a71549c3bae7 2023-05-22 17:01:15,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:15,441 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6ec42f1e to 127.0.0.1:52365 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 17:01:15,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66ff730d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 17:01:15,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-22 17:01:15,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-22 17:01:15,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 17:01:15,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store-tmp 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-22 17:01:15,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:15,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 17:01:15,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/WALs/jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41393%2C1684774875352, suffix=, logDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/WALs/jenkins-hbase4.apache.org,41393,1684774875352, archiveDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/oldWALs, maxLogs=10 2023-05-22 17:01:15,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/WALs/jenkins-hbase4.apache.org,41393,1684774875352/jenkins-hbase4.apache.org%2C41393%2C1684774875352.1684774875463 2023-05-22 17:01:15,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44369,DS-a35874dd-8459-4c8b-8ee1-1cfec134530a,DISK], DatanodeInfoWithStorage[127.0.0.1:44959,DS-f456166b-b549-401d-8457-03c62b7ccded,DISK]] 2023-05-22 17:01:15,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-22 17:01:15,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:01:15,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,469 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,470 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-22 17:01:15,470 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-22 17:01:15,470 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-22 17:01:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 17:01:15,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=805284, jitterRate=0.023971587419509888}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 17:01:15,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-22 17:01:15,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-22 17:01:15,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-22 17:01:15,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-22 17:01:15,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-22 17:01:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-22 17:01:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-22 17:01:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-22 17:01:15,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-22 17:01:15,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-22 17:01:15,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-22 17:01:15,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
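The master bring-up recorded above (RPC executors, ZooKeeper registration, the local 'master:store' region, the procedure store and balancer configuration) is what HBaseTestingUtility drives when a test starts a mini cluster. A minimal sketch of that entry point, assuming the HBase 2.x test API; the class and builder methods are from that API, while the option values are illustrative rather than read from this run:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // One master, one region server, two datanodes: the same shape of
        // cluster whose startup is logged here.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .build();
        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, the master and the region server
        try {
          // test body would go here
        } finally {
          util.shutdownMiniCluster();    // stops the cluster and removes the test data directories
        }
      }
    }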
2023-05-22 17:01:15,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-22 17:01:15,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-22 17:01:15,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-22 17:01:15,492 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-22 17:01:15,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-22 17:01:15,494 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-22 17:01:15,495 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:15,495 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:15,495 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41393,1684774875352, sessionid=0x10053d6f3cc0000, setting cluster-up flag (Was=false) 2023-05-22 17:01:15,500 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-22 17:01:15,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,508 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 
17:01:15,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-22 17:01:15,513 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:15,513 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.hbase-snapshot/.tmp 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 17:01:15,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684774905519 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-22 17:01:15,519 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-22 17:01:15,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-22 17:01:15,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-22 17:01:15,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-22 17:01:15,520 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 17:01:15,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-22 17:01:15,520 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-22 17:01:15,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-22 17:01:15,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-22 17:01:15,521 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 17:01:15,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774875521,5,FailOnTimeoutGroup] 2023-05-22 17:01:15,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774875521,5,FailOnTimeoutGroup] 2023-05-22 17:01:15,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-22 17:01:15,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-22 17:01:15,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,528 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 17:01:15,529 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-22 17:01:15,529 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577 2023-05-22 17:01:15,536 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:01:15,537 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 17:01:15,538 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/info 2023-05-22 17:01:15,539 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 17:01:15,539 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,540 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 17:01:15,541 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/rep_barrier 2023-05-22 17:01:15,541 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 17:01:15,542 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,542 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 17:01:15,543 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/table 2023-05-22 17:01:15,543 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 17:01:15,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,544 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740 2023-05-22 17:01:15,545 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740 2023-05-22 17:01:15,546 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 17:01:15,548 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 17:01:15,550 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=815348, jitterRate=0.03676976263523102}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 17:01:15,550 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 17:01:15,550 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 17:01:15,551 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 17:01:15,551 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 17:01:15,552 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-22 17:01:15,552 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-22 17:01:15,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-22 17:01:15,553 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-22 17:01:15,554 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
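Once the ASSIGN procedure initialized above completes, hbase:meta becomes reachable through the ordinary client API. A hedged sketch, assuming a started HBaseTestingUtility named util; the helper class and method names are invented for illustration:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Table;

    final class MetaClientSketch {
      // Reads against hbase:meta succeed only once the assignment logged above
      // has placed the region on a region server.
      static void touchMeta(HBaseTestingUtility util) throws Exception {
        try (Table meta = util.getConnection().getTable(TableName.META_TABLE_NAME)) {
          // meta.get(...) / meta.getScanner(...) would now be served by that region server
        }
      }
    }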
2023-05-22 17:01:15,607 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(951): ClusterId : 29f10909-64de-47a5-be74-a71549c3bae7 2023-05-22 17:01:15,608 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-22 17:01:15,610 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-22 17:01:15,610 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-22 17:01:15,612 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-22 17:01:15,613 DEBUG [RS:0;jenkins-hbase4:38403] zookeeper.ReadOnlyZKClient(139): Connect 0x44218bad to 127.0.0.1:52365 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 17:01:15,616 DEBUG [RS:0;jenkins-hbase4:38403] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ae67d59, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 17:01:15,616 DEBUG [RS:0;jenkins-hbase4:38403] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ed9e0dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 17:01:15,625 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38403 2023-05-22 17:01:15,625 INFO [RS:0;jenkins-hbase4:38403] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-22 17:01:15,625 INFO [RS:0;jenkins-hbase4:38403] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-22 17:01:15,625 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-22 17:01:15,625 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,41393,1684774875352 with isa=jenkins-hbase4.apache.org/172.31.14.131:38403, startcode=1684774875390 2023-05-22 17:01:15,626 DEBUG [RS:0;jenkins-hbase4:38403] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-22 17:01:15,628 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36235, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-22 17:01:15,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41393] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,630 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577 2023-05-22 17:01:15,630 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44785 2023-05-22 17:01:15,630 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-22 17:01:15,632 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 17:01:15,632 DEBUG [RS:0;jenkins-hbase4:38403] zookeeper.ZKUtil(162): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,633 WARN [RS:0;jenkins-hbase4:38403] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-22 17:01:15,633 INFO [RS:0;jenkins-hbase4:38403] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 17:01:15,633 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,633 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38403,1684774875390] 2023-05-22 17:01:15,636 DEBUG [RS:0;jenkins-hbase4:38403] zookeeper.ZKUtil(162): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,636 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-22 17:01:15,637 INFO [RS:0;jenkins-hbase4:38403] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-22 17:01:15,638 INFO [RS:0;jenkins-hbase4:38403] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-22 17:01:15,638 INFO [RS:0;jenkins-hbase4:38403] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-22 17:01:15,638 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,638 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-22 17:01:15,639 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-22 17:01:15,639 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 DEBUG [RS:0;jenkins-hbase4:38403] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-22 17:01:15,640 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,640 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,640 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,651 INFO [RS:0;jenkins-hbase4:38403] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-22 17:01:15,651 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38403,1684774875390-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-22 17:01:15,662 INFO [RS:0;jenkins-hbase4:38403] regionserver.Replication(203): jenkins-hbase4.apache.org,38403,1684774875390 started 2023-05-22 17:01:15,662 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38403,1684774875390, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38403, sessionid=0x10053d6f3cc0001 2023-05-22 17:01:15,662 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-22 17:01:15,662 DEBUG [RS:0;jenkins-hbase4:38403] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,662 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38403,1684774875390' 2023-05-22 17:01:15,662 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38403,1684774875390' 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-22 17:01:15,663 DEBUG [RS:0;jenkins-hbase4:38403] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-22 17:01:15,663 INFO [RS:0;jenkins-hbase4:38403] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-22 17:01:15,663 INFO [RS:0;jenkins-hbase4:38403] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
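With the region server registered and its chores running, the following lines show its first FSHLog WAL being created. A test exercising log rolling can fetch that WAL and request a roll through the public Admin API; a sketch under the same assumption of a running HBaseTestingUtility named util (helper names are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.regionserver.HRegionServer;
    import org.apache.hadoop.hbase.wal.WAL;

    final class LogRollSketch {
      static void rollFirstRegionServerWal(HBaseTestingUtility util) throws Exception {
        HRegionServer rs = util.getMiniHBaseCluster().getRegionServer(0); // the single RS started above
        WAL wal = rs.getWAL(null);               // null RegionInfo -> the server's default WAL
        System.out.println("WAL before roll: " + wal);
        Admin admin = util.getAdmin();           // Admin backed by the utility's shared connection
        admin.rollWALWriter(rs.getServerName()); // asks the RS to roll, producing a new "New WAL ..." entry
      }
    }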
2023-05-22 17:01:15,705 DEBUG [jenkins-hbase4:41393] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-22 17:01:15,705 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38403,1684774875390, state=OPENING 2023-05-22 17:01:15,707 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-22 17:01:15,708 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:15,708 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38403,1684774875390}] 2023-05-22 17:01:15,708 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 17:01:15,765 INFO [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38403%2C1684774875390, suffix=, logDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390, archiveDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs, maxLogs=32 2023-05-22 17:01:15,773 INFO [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390/jenkins-hbase4.apache.org%2C38403%2C1684774875390.1684774875766 2023-05-22 17:01:15,774 DEBUG [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44369,DS-a35874dd-8459-4c8b-8ee1-1cfec134530a,DISK], DatanodeInfoWithStorage[127.0.0.1:44959,DS-f456166b-b549-401d-8457-03c62b7ccded,DISK]] 2023-05-22 17:01:15,862 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:15,862 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-22 17:01:15,864 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56490, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-22 17:01:15,868 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-22 17:01:15,868 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 17:01:15,869 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38403%2C1684774875390.meta, suffix=.meta, logDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390, archiveDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs, maxLogs=32 2023-05-22 17:01:15,877 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390/jenkins-hbase4.apache.org%2C38403%2C1684774875390.meta.1684774875870.meta 2023-05-22 17:01:15,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44369,DS-a35874dd-8459-4c8b-8ee1-1cfec134530a,DISK], DatanodeInfoWithStorage[127.0.0.1:44959,DS-f456166b-b549-401d-8457-03c62b7ccded,DISK]] 2023-05-22 17:01:15,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-22 17:01:15,878 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-22 17:01:15,878 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-22 17:01:15,879 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-22 17:01:15,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/info 2023-05-22 17:01:15,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/info 2023-05-22 17:01:15,880 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-22 17:01:15,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-22 17:01:15,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/rep_barrier 2023-05-22 17:01:15,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/rep_barrier 2023-05-22 17:01:15,882 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-22 17:01:15,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-22 17:01:15,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/table 2023-05-22 17:01:15,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/table 2023-05-22 17:01:15,885 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-22 17:01:15,885 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:15,886 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740 2023-05-22 17:01:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740 2023-05-22 17:01:15,889 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-22 17:01:15,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-22 17:01:15,890 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=827169, jitterRate=0.05180095136165619}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-22 17:01:15,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-22 17:01:15,894 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684774875862 2023-05-22 17:01:15,898 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-22 17:01:15,899 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-22 17:01:15,899 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38403,1684774875390, state=OPEN 2023-05-22 17:01:15,902 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-22 17:01:15,902 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-22 17:01:15,904 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-22 17:01:15,904 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38403,1684774875390 in 194 msec 2023-05-22 17:01:15,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-22 17:01:15,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 352 msec 2023-05-22 17:01:15,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 392 msec 2023-05-22 17:01:15,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684774875907, completionTime=-1 2023-05-22 17:01:15,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-22 17:01:15,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-22 17:01:15,910 DEBUG [hconnection-0x78d5f67d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 17:01:15,912 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56504, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 17:01:15,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-22 17:01:15,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684774935913 2023-05-22 17:01:15,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684774995913 2023-05-22 17:01:15,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41393,1684774875352-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41393,1684774875352-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41393,1684774875352-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41393, period=300000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-22 17:01:15,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-22 17:01:15,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-22 17:01:15,924 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-22 17:01:15,925 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-22 17:01:15,925 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-22 17:01:15,927 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.tmp/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:15,928 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.tmp/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b empty. 2023-05-22 17:01:15,928 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.tmp/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:15,928 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-22 17:01:15,937 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-22 17:01:15,938 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => d614d25e3a2cfc04a14be68bbd74978b, NAME => 'hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/.tmp 2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing d614d25e3a2cfc04a14be68bbd74978b, disabling compactions & flushes 2023-05-22 17:01:15,948 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 
2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. after waiting 0 ms 2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:15,948 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:15,948 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for d614d25e3a2cfc04a14be68bbd74978b: 2023-05-22 17:01:15,950 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-22 17:01:15,951 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774875951"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684774875951"}]},"ts":"1684774875951"} 2023-05-22 17:01:15,953 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-22 17:01:15,954 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-22 17:01:15,954 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774875954"}]},"ts":"1684774875954"} 2023-05-22 17:01:15,955 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-22 17:01:15,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d614d25e3a2cfc04a14be68bbd74978b, ASSIGN}] 2023-05-22 17:01:15,967 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d614d25e3a2cfc04a14be68bbd74978b, ASSIGN 2023-05-22 17:01:15,967 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=d614d25e3a2cfc04a14be68bbd74978b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38403,1684774875390; forceNewPlan=false, retain=false 2023-05-22 17:01:16,119 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d614d25e3a2cfc04a14be68bbd74978b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,119 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774876119"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1684774876119"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684774876119"}]},"ts":"1684774876119"} 2023-05-22 17:01:16,121 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure d614d25e3a2cfc04a14be68bbd74978b, server=jenkins-hbase4.apache.org,38403,1684774875390}] 2023-05-22 17:01:16,276 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d614d25e3a2cfc04a14be68bbd74978b, NAME => 'hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.', STARTKEY => '', ENDKEY => ''} 2023-05-22 17:01:16,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-22 17:01:16,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,278 INFO [StoreOpener-d614d25e3a2cfc04a14be68bbd74978b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,279 DEBUG [StoreOpener-d614d25e3a2cfc04a14be68bbd74978b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/info 2023-05-22 17:01:16,279 DEBUG [StoreOpener-d614d25e3a2cfc04a14be68bbd74978b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/info 2023-05-22 17:01:16,279 INFO [StoreOpener-d614d25e3a2cfc04a14be68bbd74978b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d614d25e3a2cfc04a14be68bbd74978b columnFamilyName info 2023-05-22 17:01:16,280 INFO [StoreOpener-d614d25e3a2cfc04a14be68bbd74978b-1] regionserver.HStore(310): Store=d614d25e3a2cfc04a14be68bbd74978b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-22 17:01:16,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-22 17:01:16,286 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d614d25e3a2cfc04a14be68bbd74978b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=883484, jitterRate=0.12340816855430603}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-22 17:01:16,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d614d25e3a2cfc04a14be68bbd74978b: 2023-05-22 17:01:16,288 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b., pid=6, masterSystemTime=1684774876273 2023-05-22 17:01:16,290 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,291 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 
2023-05-22 17:01:16,291 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d614d25e3a2cfc04a14be68bbd74978b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,291 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684774876291"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1684774876291"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684774876291"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684774876291"}]},"ts":"1684774876291"} 2023-05-22 17:01:16,295 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-22 17:01:16,295 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure d614d25e3a2cfc04a14be68bbd74978b, server=jenkins-hbase4.apache.org,38403,1684774875390 in 172 msec 2023-05-22 17:01:16,297 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-22 17:01:16,297 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=d614d25e3a2cfc04a14be68bbd74978b, ASSIGN in 332 msec 2023-05-22 17:01:16,299 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-22 17:01:16,299 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684774876299"}]},"ts":"1684774876299"} 2023-05-22 17:01:16,300 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-22 17:01:16,303 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-22 17:01:16,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 381 msec 2023-05-22 17:01:16,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-22 17:01:16,325 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-22 17:01:16,325 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:16,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-22 17:01:16,336 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): 
master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 17:01:16,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-05-22 17:01:16,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-22 17:01:16,358 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-22 17:01:16,367 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-05-22 17:01:16,375 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-22 17:01:16,379 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-22 17:01:16,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.960sec 2023-05-22 17:01:16,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-22 17:01:16,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-22 17:01:16,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-22 17:01:16,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41393,1684774875352-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-22 17:01:16,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41393,1684774875352-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-22 17:01:16,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-22 17:01:16,408 DEBUG [Listener at localhost/38851] zookeeper.ReadOnlyZKClient(139): Connect 0x3639086c to 127.0.0.1:52365 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-22 17:01:16,414 DEBUG [Listener at localhost/38851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25b43bb9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-22 17:01:16,418 DEBUG [hconnection-0x747db2b9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-22 17:01:16,421 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-22 17:01:16,423 INFO [Listener at localhost/38851] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:16,423 INFO [Listener at localhost/38851] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-22 17:01:16,427 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-22 17:01:16,427 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:16,427 INFO [Listener at localhost/38851] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-22 17:01:16,428 INFO [Listener at localhost/38851] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-22 17:01:16,429 INFO [Listener at localhost/38851] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1, archiveDir=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs, maxLogs=32 2023-05-22 17:01:16,436 INFO [Listener at localhost/38851] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1/test.com%2C8080%2C1.1684774876430 2023-05-22 17:01:16,436 DEBUG [Listener at localhost/38851] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44369,DS-a35874dd-8459-4c8b-8ee1-1cfec134530a,DISK], DatanodeInfoWithStorage[127.0.0.1:44959,DS-f456166b-b549-401d-8457-03c62b7ccded,DISK]] 2023-05-22 17:01:16,441 INFO [Listener at localhost/38851] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1/test.com%2C8080%2C1.1684774876430 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1/test.com%2C8080%2C1.1684774876436 
2023-05-22 17:01:16,441 DEBUG [Listener at localhost/38851] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44369,DS-a35874dd-8459-4c8b-8ee1-1cfec134530a,DISK], DatanodeInfoWithStorage[127.0.0.1:44959,DS-f456166b-b549-401d-8457-03c62b7ccded,DISK]] 2023-05-22 17:01:16,441 DEBUG [Listener at localhost/38851] wal.AbstractFSWAL(716): hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1/test.com%2C8080%2C1.1684774876430 is not closed yet, will try archiving it next time 2023-05-22 17:01:16,442 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1 2023-05-22 17:01:16,449 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/test.com,8080,1/test.com%2C8080%2C1.1684774876430 to hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs/test.com%2C8080%2C1.1684774876430 2023-05-22 17:01:16,451 DEBUG [Listener at localhost/38851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs 2023-05-22 17:01:16,451 INFO [Listener at localhost/38851] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1684774876436) 2023-05-22 17:01:16,451 INFO [Listener at localhost/38851] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-22 17:01:16,451 DEBUG [Listener at localhost/38851] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3639086c to 127.0.0.1:52365 2023-05-22 17:01:16,451 DEBUG [Listener at localhost/38851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:16,452 DEBUG [Listener at localhost/38851] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-22 17:01:16,452 DEBUG [Listener at localhost/38851] util.JVMClusterUtil(257): Found active master hash=388713160, stopped=false 2023-05-22 17:01:16,452 INFO [Listener at localhost/38851] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:16,454 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:16,454 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-22 17:01:16,454 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-22 17:01:16,454 INFO [Listener at localhost/38851] procedure2.ProcedureExecutor(629): Stopping 2023-05-22 17:01:16,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:16,456 DEBUG [Listener at localhost/38851] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ec42f1e to 127.0.0.1:52365 2023-05-22 17:01:16,456 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-22 17:01:16,456 DEBUG [Listener at localhost/38851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:16,456 INFO [Listener at localhost/38851] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38403,1684774875390' ***** 2023-05-22 17:01:16,456 INFO [Listener at localhost/38851] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-22 17:01:16,456 INFO [RS:0;jenkins-hbase4:38403] regionserver.HeapMemoryManager(220): Stopping 2023-05-22 17:01:16,456 INFO [RS:0;jenkins-hbase4:38403] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-22 17:01:16,456 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-22 17:01:16,456 INFO [RS:0;jenkins-hbase4:38403] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-22 17:01:16,457 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(3303): Received CLOSE for d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,457 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,457 DEBUG [RS:0;jenkins-hbase4:38403] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x44218bad to 127.0.0.1:52365 2023-05-22 17:01:16,457 DEBUG [RS:0;jenkins-hbase4:38403] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:16,457 INFO [RS:0;jenkins-hbase4:38403] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-22 17:01:16,457 INFO [RS:0;jenkins-hbase4:38403] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-22 17:01:16,457 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d614d25e3a2cfc04a14be68bbd74978b, disabling compactions & flushes 2023-05-22 17:01:16,457 INFO [RS:0;jenkins-hbase4:38403] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-22 17:01:16,457 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,458 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. after waiting 0 ms 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 
2023-05-22 17:01:16,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d614d25e3a2cfc04a14be68bbd74978b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-22 17:01:16,458 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-22 17:01:16,458 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1478): Online Regions={d614d25e3a2cfc04a14be68bbd74978b=hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b., 1588230740=hbase:meta,,1.1588230740} 2023-05-22 17:01:16,458 DEBUG [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1504): Waiting on 1588230740, d614d25e3a2cfc04a14be68bbd74978b 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-22 17:01:16,458 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-22 17:01:16,458 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-22 17:01:16,458 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-22 17:01:16,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/.tmp/info/ffb94140a5ed494fab96f2038b43493d 2023-05-22 17:01:16,474 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/.tmp/info/317ce0850fae43dc826afcdc960b7955 2023-05-22 17:01:16,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/.tmp/info/ffb94140a5ed494fab96f2038b43493d as hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/info/ffb94140a5ed494fab96f2038b43493d 2023-05-22 17:01:16,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/info/ffb94140a5ed494fab96f2038b43493d, entries=2, sequenceid=6, filesize=4.8 K 2023-05-22 17:01:16,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for d614d25e3a2cfc04a14be68bbd74978b in 31ms, sequenceid=6, compaction requested=false 2023-05-22 17:01:16,489 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-22 17:01:16,493 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/.tmp/table/b1113d61666242d98b0d4d1e30c1b867 2023-05-22 17:01:16,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/namespace/d614d25e3a2cfc04a14be68bbd74978b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-22 17:01:16,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d614d25e3a2cfc04a14be68bbd74978b: 2023-05-22 17:01:16,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684774875922.d614d25e3a2cfc04a14be68bbd74978b. 2023-05-22 17:01:16,498 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/.tmp/info/317ce0850fae43dc826afcdc960b7955 as hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/info/317ce0850fae43dc826afcdc960b7955 2023-05-22 17:01:16,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/info/317ce0850fae43dc826afcdc960b7955, entries=10, sequenceid=9, filesize=5.9 K 2023-05-22 17:01:16,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/.tmp/table/b1113d61666242d98b0d4d1e30c1b867 as hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/table/b1113d61666242d98b0d4d1e30c1b867 2023-05-22 17:01:16,508 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/table/b1113d61666242d98b0d4d1e30c1b867, entries=2, sequenceid=9, filesize=4.7 K 2023-05-22 17:01:16,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 51ms, sequenceid=9, compaction requested=false 2023-05-22 17:01:16,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-22 17:01:16,516 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-22 17:01:16,517 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-22 17:01:16,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-22 17:01:16,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-22 17:01:16,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-22 17:01:16,642 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-22 17:01:16,643 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-22 17:01:16,658 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38403,1684774875390; all regions closed. 2023-05-22 17:01:16,659 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,664 DEBUG [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs 2023-05-22 17:01:16,664 INFO [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38403%2C1684774875390.meta:.meta(num 1684774875870) 2023-05-22 17:01:16,664 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/WALs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,668 DEBUG [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/oldWALs 2023-05-22 17:01:16,668 INFO [RS:0;jenkins-hbase4:38403] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38403%2C1684774875390:(num 1684774875766) 2023-05-22 17:01:16,668 DEBUG [RS:0;jenkins-hbase4:38403] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:16,668 INFO [RS:0;jenkins-hbase4:38403] regionserver.LeaseManager(133): Closed leases 2023-05-22 17:01:16,668 INFO [RS:0;jenkins-hbase4:38403] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-22 17:01:16,668 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-22 17:01:16,669 INFO [RS:0;jenkins-hbase4:38403] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38403 2023-05-22 17:01:16,672 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 17:01:16,672 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38403,1684774875390 2023-05-22 17:01:16,672 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-22 17:01:16,673 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38403,1684774875390] 2023-05-22 17:01:16,673 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38403,1684774875390; numProcessing=1 2023-05-22 17:01:16,674 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38403,1684774875390 already deleted, retry=false 2023-05-22 17:01:16,674 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38403,1684774875390 expired; onlineServers=0 2023-05-22 17:01:16,675 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41393,1684774875352' ***** 2023-05-22 17:01:16,675 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-22 17:01:16,675 DEBUG [M:0;jenkins-hbase4:41393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1314ccf5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-22 17:01:16,675 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41393,1684774875352 2023-05-22 17:01:16,675 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41393,1684774875352; all regions closed. 2023-05-22 17:01:16,675 DEBUG [M:0;jenkins-hbase4:41393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-22 17:01:16,675 DEBUG [M:0;jenkins-hbase4:41393] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-22 17:01:16,675 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-22 17:01:16,675 DEBUG [M:0;jenkins-hbase4:41393] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-22 17:01:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774875521] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1684774875521,5,FailOnTimeoutGroup]
2023-05-22 17:01:16,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774875521] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1684774875521,5,FailOnTimeoutGroup]
2023-05-22 17:01:16,676 INFO [M:0;jenkins-hbase4:41393] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-22 17:01:16,676 INFO [M:0;jenkins-hbase4:41393] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-22 17:01:16,676 INFO [M:0;jenkins-hbase4:41393] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-05-22 17:01:16,677 DEBUG [M:0;jenkins-hbase4:41393] master.HMaster(1512): Stopping service threads
2023-05-22 17:01:16,677 INFO [M:0;jenkins-hbase4:41393] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher
2023-05-22 17:01:16,677 ERROR [M:0;jenkins-hbase4:41393] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-22 17:01:16,677 INFO [M:0;jenkins-hbase4:41393] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-22 17:01:16,677 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-22 17:01:16,677 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-22 17:01:16,678 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-22 17:01:16,678 DEBUG [M:0;jenkins-hbase4:41393] zookeeper.ZKUtil(398): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-22 17:01:16,678 WARN [M:0;jenkins-hbase4:41393] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-22 17:01:16,678 INFO [M:0;jenkins-hbase4:41393] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-22 17:01:16,678 INFO [M:0;jenkins-hbase4:41393] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-22 17:01:16,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-22 17:01:16,679 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-22 17:01:16,679 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-22 17:01:16,679 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-22 17:01:16,679 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-22 17:01:16,679 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-22 17:01:16,679 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB
2023-05-22 17:01:16,687 INFO [M:0;jenkins-hbase4:41393] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0813114852ee4b7087b2dec70a2d86d9
2023-05-22 17:01:16,691 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0813114852ee4b7087b2dec70a2d86d9 as hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0813114852ee4b7087b2dec70a2d86d9
2023-05-22 17:01:16,695 INFO [M:0;jenkins-hbase4:41393] regionserver.HStore(1080): Added hdfs://localhost:44785/user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0813114852ee4b7087b2dec70a2d86d9, entries=8, sequenceid=66, filesize=6.3 K
2023-05-22 17:01:16,697 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 18ms, sequenceid=66, compaction requested=false
2023-05-22 17:01:16,698 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-22 17:01:16,698 DEBUG [M:0;jenkins-hbase4:41393] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-22 17:01:16,699 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f43df0bb-54d0-6a92-f504-7ff3825b7577/MasterData/WALs/jenkins-hbase4.apache.org,41393,1684774875352
2023-05-22 17:01:16,701 INFO [M:0;jenkins-hbase4:41393] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-22 17:01:16,701 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-22 17:01:16,702 INFO [M:0;jenkins-hbase4:41393] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41393
2023-05-22 17:01:16,705 DEBUG [M:0;jenkins-hbase4:41393] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41393,1684774875352 already deleted, retry=false
2023-05-22 17:01:16,855 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-22 17:01:16,855 INFO [M:0;jenkins-hbase4:41393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41393,1684774875352; zookeeper connection closed.
2023-05-22 17:01:16,855 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): master:41393-0x10053d6f3cc0000, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-22 17:01:16,955 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-22 17:01:16,955 INFO [RS:0;jenkins-hbase4:38403] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38403,1684774875390; zookeeper connection closed.
2023-05-22 17:01:16,955 DEBUG [Listener at localhost/38851-EventThread] zookeeper.ZKWatcher(600): regionserver:38403-0x10053d6f3cc0001, quorum=127.0.0.1:52365, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-22 17:01:16,956 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7b6db1c7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7b6db1c7
2023-05-22 17:01:16,956 INFO [Listener at localhost/38851] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-22 17:01:16,956 WARN [Listener at localhost/38851] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-22 17:01:16,960 INFO [Listener at localhost/38851] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-22 17:01:17,063 WARN [BP-56436296-172.31.14.131-1684774874789 heartbeating to localhost/127.0.0.1:44785] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-22 17:01:17,063 WARN [BP-56436296-172.31.14.131-1684774874789 heartbeating to localhost/127.0.0.1:44785] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-56436296-172.31.14.131-1684774874789 (Datanode Uuid 0b9dd38d-3979-4d49-96fe-9313c0ec6368) service to localhost/127.0.0.1:44785
2023-05-22 17:01:17,064 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/dfs/data/data3/current/BP-56436296-172.31.14.131-1684774874789] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-22 17:01:17,064 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/dfs/data/data4/current/BP-56436296-172.31.14.131-1684774874789] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-22 17:01:17,065 WARN [Listener at localhost/38851] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-22 17:01:17,068 INFO [Listener at localhost/38851] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-22 17:01:17,170 WARN [BP-56436296-172.31.14.131-1684774874789 heartbeating to localhost/127.0.0.1:44785] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-22 17:01:17,170 WARN [BP-56436296-172.31.14.131-1684774874789 heartbeating to localhost/127.0.0.1:44785] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-56436296-172.31.14.131-1684774874789 (Datanode Uuid e2df3fc5-40c3-41b9-8e26-75e1ef20fba0) service to localhost/127.0.0.1:44785
2023-05-22 17:01:17,171 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/dfs/data/data1/current/BP-56436296-172.31.14.131-1684774874789] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-22 17:01:17,171 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949e3de6-cac0-eb7b-8636-e3571eceb876/cluster_1e506279-1d03-9055-fae0-4252d4dc6d7e/dfs/data/data2/current/BP-56436296-172.31.14.131-1684774874789] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-22 17:01:17,180 INFO [Listener at localhost/38851] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-22 17:01:17,290 INFO [Listener at localhost/38851] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-22 17:01:17,301 INFO [Listener at localhost/38851] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-22 17:01:17,313 INFO [Listener at localhost/38851] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 105) - Thread LEAK? -, OpenFileDescriptor=562 (was 538) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=51 (was 51), ProcessCount=168 (was 168), AvailableMemoryMB=4775 (was 4742) - AvailableMemoryMB LEAK? -