2023-06-02 14:55:59,205 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38
2023-06-02 14:55:59,222 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-06-02 14:55:59,259 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=184, ProcessCount=170, AvailableMemoryMB=2242
2023-06-02 14:55:59,265 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-02 14:55:59,266 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd, deleteOnExit=true
2023-06-02 14:55:59,266 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-02 14:55:59,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/test.cache.data in system properties and HBase conf
2023-06-02 14:55:59,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/hadoop.tmp.dir in system properties and HBase conf
2023-06-02 14:55:59,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/hadoop.log.dir in system properties and HBase conf
2023-06-02 14:55:59,268 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-02 14:55:59,269 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-02 14:55:59,269 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-02 14:55:59,381 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-06-02 14:55:59,764 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-02 14:55:59,767 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-02 14:55:59,768 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-02 14:55:59,768 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-02 14:55:59,768 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-02 14:55:59,769 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-02 14:55:59,769 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-02 14:55:59,769 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-02 14:55:59,770 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-02 14:55:59,770 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-02 14:55:59,770 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/nfs.dump.dir in system properties and HBase conf
2023-06-02 14:55:59,770 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/java.io.tmpdir in system properties and HBase conf
2023-06-02 14:55:59,771 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-02 14:55:59,771 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-02 14:55:59,771 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-02 14:56:00,253 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-02 14:56:00,268 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-02 14:56:00,272 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-02 14:56:00,553 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-06-02 14:56:00,715 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-06-02 14:56:00,729 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-02 14:56:00,763 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-06-02 14:56:00,821 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/java.io.tmpdir/Jetty_localhost_35475_hdfs____.hwqsbq/webapp
2023-06-02 14:56:00,954 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35475
2023-06-02 14:56:00,961 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-02 14:56:00,964 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-02 14:56:00,964 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-02 14:56:01,375 WARN [Listener at localhost/42517] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-02 14:56:01,456 WARN [Listener at localhost/42517] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-02 14:56:01,478 WARN [Listener at localhost/42517] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-02 14:56:01,486 INFO [Listener at localhost/42517] log.Slf4jLog(67): jetty-6.1.26
2023-06-02 14:56:01,492 INFO [Listener at localhost/42517] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/java.io.tmpdir/Jetty_localhost_38091_datanode____.etsqmu/webapp
2023-06-02 14:56:01,604 INFO [Listener at localhost/42517] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38091
2023-06-02 14:56:01,919 WARN [Listener at localhost/34065] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-02 14:56:01,930 WARN [Listener at localhost/34065] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-02 14:56:01,934 WARN [Listener at localhost/34065] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-02 14:56:01,936 INFO [Listener at localhost/34065] log.Slf4jLog(67): jetty-6.1.26
2023-06-02 14:56:01,941 INFO [Listener at localhost/34065] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/java.io.tmpdir/Jetty_localhost_36555_datanode____.1b2jt3/webapp
2023-06-02 14:56:02,054 INFO [Listener at localhost/34065] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36555
2023-06-02 14:56:02,070 WARN [Listener at localhost/40969] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-02 14:56:02,416 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x219a03ad53da835e: Processing first storage report for DS-960b30de-cf30-47cd-9def-035dd013f5b3 from datanode bee256ca-c51d-4f0d-87f0-c5e120e1a39a
2023-06-02 14:56:02,418 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x219a03ad53da835e: from storage DS-960b30de-cf30-47cd-9def-035dd013f5b3 node DatanodeRegistration(127.0.0.1:43517, datanodeUuid=bee256ca-c51d-4f0d-87f0-c5e120e1a39a, infoPort=35077, infoSecurePort=0, ipcPort=40969, storageInfo=lv=-57;cid=testClusterID;nsid=2027641953;c=1685717760344), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-06-02 14:56:02,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf779cb5cdd2c132: Processing first storage report for DS-099657df-3c62-454f-b758-40bbc385c6a6 from datanode 24f6b340-d19e-440a-bf0b-5eae755dd490
2023-06-02 14:56:02,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf779cb5cdd2c132: from storage DS-099657df-3c62-454f-b758-40bbc385c6a6 node DatanodeRegistration(127.0.0.1:36973, datanodeUuid=24f6b340-d19e-440a-bf0b-5eae755dd490, infoPort=38723, infoSecurePort=0, ipcPort=34065, storageInfo=lv=-57;cid=testClusterID;nsid=2027641953;c=1685717760344), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-02 14:56:02,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x219a03ad53da835e: Processing first storage report for DS-5106a2c9-6860-4f4c-92b4-2c5fda8791ea from datanode bee256ca-c51d-4f0d-87f0-c5e120e1a39a
2023-06-02 14:56:02,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x219a03ad53da835e: from storage DS-5106a2c9-6860-4f4c-92b4-2c5fda8791ea node DatanodeRegistration(127.0.0.1:43517, datanodeUuid=bee256ca-c51d-4f0d-87f0-c5e120e1a39a, infoPort=35077, infoSecurePort=0, ipcPort=40969, storageInfo=lv=-57;cid=testClusterID;nsid=2027641953;c=1685717760344), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-02 14:56:02,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf779cb5cdd2c132: Processing first storage report for DS-e9aca378-a545-4f12-805e-656f31e63ad8 from datanode 24f6b340-d19e-440a-bf0b-5eae755dd490
2023-06-02 14:56:02,420 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf779cb5cdd2c132: from storage DS-e9aca378-a545-4f12-805e-656f31e63ad8 node DatanodeRegistration(127.0.0.1:36973, datanodeUuid=24f6b340-d19e-440a-bf0b-5eae755dd490, infoPort=38723, infoSecurePort=0, ipcPort=34065, storageInfo=lv=-57;cid=testClusterID;nsid=2027641953;c=1685717760344), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-02 14:56:02,520 DEBUG [Listener at localhost/40969] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38
2023-06-02 14:56:02,605 INFO [Listener at localhost/40969] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/zookeeper_0, clientPort=51661, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-02 14:56:02,622 INFO [Listener at localhost/40969] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51661
2023-06-02 14:56:02,630 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:02,632 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:03,286 INFO [Listener at localhost/40969] util.FSUtils(471): Created version file at hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd with version=8
2023-06-02 14:56:03,286 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging
2023-06-02 14:56:03,610 INFO [Listener at localhost/40969] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-06-02 14:56:04,065 INFO [Listener at localhost/40969] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-06-02 14:56:04,096 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,097 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,097 INFO [Listener at localhost/40969] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-02 14:56:04,097 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,098 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-02 14:56:04,236 INFO [Listener at localhost/40969] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-02 14:56:04,308 DEBUG [Listener at localhost/40969] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-06-02 14:56:04,402 INFO [Listener at localhost/40969] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43361
2023-06-02 14:56:04,412 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:04,414 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:04,435 INFO [Listener at localhost/40969] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43361 connecting to ZooKeeper ensemble=127.0.0.1:51661
2023-06-02 14:56:04,482 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:433610x0, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-02 14:56:04,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43361-0x1008c0a451f0000 connected
2023-06-02 14:56:04,513 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-02 14:56:04,514 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-02 14:56:04,517 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-02 14:56:04,525 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43361
2023-06-02 14:56:04,525 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43361
2023-06-02 14:56:04,526 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43361
2023-06-02 14:56:04,526 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43361
2023-06-02 14:56:04,526 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43361
2023-06-02 14:56:04,532 INFO [Listener at localhost/40969] master.HMaster(444): hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd, hbase.cluster.distributed=false
2023-06-02 14:56:04,597 INFO [Listener at localhost/40969] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-06-02 14:56:04,598 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,598 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,598 INFO [Listener at localhost/40969] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-02 14:56:04,598 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-02 14:56:04,598 INFO [Listener at localhost/40969] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-02 14:56:04,603 INFO [Listener at localhost/40969] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-02 14:56:04,606 INFO [Listener at localhost/40969] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46785
2023-06-02 14:56:04,608 INFO [Listener at localhost/40969] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-02 14:56:04,614 DEBUG [Listener at localhost/40969] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-02 14:56:04,615 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:04,617 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:04,619 INFO [Listener at localhost/40969] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46785 connecting to ZooKeeper ensemble=127.0.0.1:51661
2023-06-02 14:56:04,622 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:467850x0, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-02 14:56:04,623 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46785-0x1008c0a451f0001 connected
2023-06-02 14:56:04,623 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-02 14:56:04,624 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-02 14:56:04,625 DEBUG [Listener at localhost/40969] zookeeper.ZKUtil(164): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-02 14:56:04,626 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46785
2023-06-02 14:56:04,627 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46785
2023-06-02 14:56:04,627 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46785
2023-06-02 14:56:04,627 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46785
2023-06-02 14:56:04,627 DEBUG [Listener at localhost/40969] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46785
2023-06-02 14:56:04,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:04,638 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-02 14:56:04,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:04,657 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-02 14:56:04,657 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-02 14:56:04,657 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:04,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-02 14:56:04,659 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-02 14:56:04,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43361,1685717763435 from backup master directory
2023-06-02 14:56:04,664 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:04,664 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-02 14:56:04,664 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-02 14:56:04,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:04,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-06-02 14:56:04,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-06-02 14:56:04,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase.id with ID: 00a456b1-2509-4915-93e8-cd0325cc794f
2023-06-02 14:56:04,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-02 14:56:04,815 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:04,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x108ed0ed to 127.0.0.1:51661 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-02 14:56:04,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d829ea1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-02 14:56:04,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-02 14:56:04,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-02 14:56:04,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-02 14:56:04,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store-tmp
2023-06-02 14:56:04,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-02 14:56:04,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-02 14:56:04,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 14:56:04,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 14:56:04,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-02 14:56:04,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 14:56:04,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 14:56:04,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-02 14:56:04,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/WALs/jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:05,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43361%2C1685717763435, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/WALs/jenkins-hbase4.apache.org,43361,1685717763435, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/oldWALs, maxLogs=10
2023-06-02 14:56:05,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-06-02 14:56:05,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/WALs/jenkins-hbase4.apache.org,43361,1685717763435/jenkins-hbase4.apache.org%2C43361%2C1685717763435.1685717765032
2023-06-02 14:56:05,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]]
2023-06-02 14:56:05,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-02 14:56:05,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-02 14:56:05,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,122 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,129 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-02 14:56:05,155 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-02 14:56:05,168 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-02 14:56:05,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,176 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-02 14:56:05,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-02 14:56:05,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=840404, jitterRate=0.06863009929656982}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-02 14:56:05,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-02 14:56:05,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-02 14:56:05,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-02 14:56:05,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-02 14:56:05,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-02 14:56:05,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec
2023-06-02 14:56:05,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 33 msec
2023-06-02 14:56:05,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-06-02 14:56:05,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-06-02 14:56:05,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-06-02 14:56:05,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-06-02 14:56:05,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-06-02 14:56:05,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-06-02 14:56:05,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-06-02 14:56:05,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-06-02 14:56:05,338 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:05,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-06-02 14:56:05,340 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-06-02 14:56:05,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-06-02 14:56:05,358 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-02 14:56:05,358 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-02 14:56:05,358 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:05,359 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43361,1685717763435, sessionid=0x1008c0a451f0000, setting cluster-up flag (Was=false)
2023-06-02 14:56:05,373 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:05,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-06-02 14:56:05,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:05,386 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 14:56:05,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-06-02 14:56:05,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43361,1685717763435
2023-06-02 14:56:05,395 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.hbase-snapshot/.tmp
2023-06-02 14:56:05,432 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(951): ClusterId : 00a456b1-2509-4915-93e8-cd0325cc794f
2023-06-02 14:56:05,438 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-06-02 14:56:05,444 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-06-02 14:56:05,444 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-06-02 14:56:05,449 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-06-02 14:56:05,450 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ReadOnlyZKClient(139): Connect 0x3d9304b6 to 127.0.0.1:51661 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-02 14:56:05,463 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3354fd74, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-02 14:56:05,464 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@376b6fc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-06-02 14:56:05,498 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46785
2023-06-02 14:56:05,503 INFO [RS:0;jenkins-hbase4:46785] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-06-02 14:56:05,503 INFO [RS:0;jenkins-hbase4:46785] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-06-02 14:56:05,503 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1022): About to register with Master.
2023-06-02 14:56:05,507 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,43361,1685717763435 with isa=jenkins-hbase4.apache.org/172.31.14.131:46785, startcode=1685717764597
2023-06-02 14:56:05,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-06-02 14:56:05,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-06-02 14:56:05,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-06-02 14:56:05,531 DEBUG [RS:0;jenkins-hbase4:46785] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-06-02 14:56:05,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685717795532
2023-06-02 14:56:05,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-06-02 14:56:05,539 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-06-02 14:56:05,539 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-06-02 14:56:05,546 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-02 14:56:05,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-06-02 14:56:05,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-06-02 14:56:05,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-06-02 14:56:05,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-06-02 14:56:05,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-06-02 14:56:05,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-02 14:56:05,559 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-06-02 14:56:05,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-06-02 14:56:05,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-06-02 14:56:05,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-06-02 14:56:05,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-06-02 14:56:05,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717765567,5,FailOnTimeoutGroup]
2023-06-02 14:56:05,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717765567,5,FailOnTimeoutGroup]
2023-06-02 14:56:05,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-06-02 14:56:05,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-06-02 14:56:05,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-06-02 14:56:05,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-06-02 14:56:05,591 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:56:05,594 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:56:05,594 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd 2023-06-02 14:56:05,620 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:05,624 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:56:05,628 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/info 2023-06-02 14:56:05,629 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:56:05,630 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:05,631 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:56:05,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:56:05,635 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:56:05,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:05,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:56:05,639 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/table 2023-06-02 14:56:05,639 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:56:05,640 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:05,642 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740 2023-06-02 14:56:05,643 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740 2023-06-02 14:56:05,648 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:56:05,650 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:56:05,654 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:56:05,655 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=707079, jitterRate=-0.10090364515781403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:56:05,656 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:56:05,656 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:56:05,656 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:56:05,656 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:56:05,656 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:56:05,656 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:56:05,657 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:56:05,657 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:56:05,663 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:56:05,663 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-02 14:56:05,674 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 14:56:05,686 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48759, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 14:56:05,691 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 14:56:05,696 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-02 14:56:05,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,716 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd 
2023-06-02 14:56:05,717 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42517 2023-06-02 14:56:05,717 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 14:56:05,722 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:56:05,722 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,723 WARN [RS:0;jenkins-hbase4:46785] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-02 14:56:05,723 INFO [RS:0;jenkins-hbase4:46785] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:56:05,723 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1946): logDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,725 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46785,1685717764597] 2023-06-02 14:56:05,732 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,742 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 14:56:05,751 INFO [RS:0;jenkins-hbase4:46785] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 14:56:05,771 INFO [RS:0;jenkins-hbase4:46785] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 14:56:05,775 INFO [RS:0;jenkins-hbase4:46785] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 14:56:05,775 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:05,776 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 14:56:05,784 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 14:56:05,784 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,784 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,784 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,785 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,786 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:56:05,787 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:05,787 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:05,787 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:05,804 INFO [RS:0;jenkins-hbase4:46785] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 14:56:05,806 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46785,1685717764597-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 14:56:05,822 INFO [RS:0;jenkins-hbase4:46785] regionserver.Replication(203): jenkins-hbase4.apache.org,46785,1685717764597 started 2023-06-02 14:56:05,822 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46785,1685717764597, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46785, sessionid=0x1008c0a451f0001 2023-06-02 14:56:05,823 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 14:56:05,823 DEBUG [RS:0;jenkins-hbase4:46785] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,823 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46785,1685717764597' 2023-06-02 14:56:05,823 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:56:05,824 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:56:05,824 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 14:56:05,824 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 14:56:05,825 DEBUG [RS:0;jenkins-hbase4:46785] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:05,825 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46785,1685717764597' 2023-06-02 14:56:05,825 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 14:56:05,825 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 14:56:05,825 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 14:56:05,826 INFO [RS:0;jenkins-hbase4:46785] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 14:56:05,826 INFO [RS:0;jenkins-hbase4:46785] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
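The region server above instantiated an FSHLogProvider-backed WAL factory, and the WAL created in the following records is parameterized as blocksize=256 MB, rollsize=128 MB, maxLogs=32. A hedged sketch of the configuration keys that drive those numbers (the values are copied from the log, not asserted as defaults):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative sketch of the WAL provider and roll-size knobs behind the AbstractFSWAL line.
public final class WalConfigSketch {
  public static Configuration walConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "filesystem");                           // -> FSHLogProvider
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);  // blocksize=256 MB
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);           // rollsize = 0.5 * blocksize = 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);                          // maxLogs=32
    return conf;
  }
}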
2023-06-02 14:56:05,848 DEBUG [jenkins-hbase4:43361] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 14:56:05,851 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46785,1685717764597, state=OPENING 2023-06-02 14:56:05,858 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 14:56:05,860 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:56:05,861 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:56:05,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46785,1685717764597}] 2023-06-02 14:56:05,938 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46785%2C1685717764597, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/oldWALs, maxLogs=32 2023-06-02 14:56:05,957 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717765941 2023-06-02 14:56:05,958 DEBUG [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:06,047 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:06,051 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 14:56:06,054 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 14:56:06,067 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 14:56:06,068 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:56:06,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46785%2C1685717764597.meta, suffix=.meta, logDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/oldWALs, maxLogs=32 2023-06-02 14:56:06,087 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.meta.1685717766073.meta 2023-06-02 14:56:06,087 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:56:06,088 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:56:06,090 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 14:56:06,108 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 14:56:06,113 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 14:56:06,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 14:56:06,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:06,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 14:56:06,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 14:56:06,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:56:06,126 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/info 2023-06-02 14:56:06,126 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/info 2023-06-02 14:56:06,126 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:56:06,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:06,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:56:06,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:56:06,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:56:06,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:56:06,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:06,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:56:06,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/table 2023-06-02 14:56:06,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/table 2023-06-02 14:56:06,134 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:56:06,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:06,137 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740 2023-06-02 14:56:06,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740 2023-06-02 14:56:06,144 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:56:06,147 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:56:06,148 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=760629, jitterRate=-0.0328102707862854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:56:06,148 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:56:06,159 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685717766040 2023-06-02 14:56:06,176 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 14:56:06,177 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 14:56:06,177 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46785,1685717764597, state=OPEN 2023-06-02 14:56:06,180 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 14:56:06,180 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:56:06,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 14:56:06,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46785,1685717764597 in 315 msec 2023-06-02 14:56:06,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 14:56:06,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 512 msec 2023-06-02 14:56:06,197 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 753 msec 2023-06-02 14:56:06,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685717766197, completionTime=-1 2023-06-02 14:56:06,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 14:56:06,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 14:56:06,257 DEBUG [hconnection-0x60b11134-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:56:06,260 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:56:06,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 14:56:06,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685717826282 2023-06-02 14:56:06,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685717886282 2023-06-02 14:56:06,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 84 msec 2023-06-02 14:56:06,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43361,1685717763435-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:06,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43361,1685717763435-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:06,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43361,1685717763435-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:06,307 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43361, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:06,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 14:56:06,313 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 14:56:06,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
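The hconnection shared-pool ClientService connection above is the client library reading hbase:meta while the AssignmentManager joins the cluster. A minimal, purely illustrative client-side equivalent (not taken from the test code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Illustrative sketch: scan hbase:meta the way a client connection would.
public final class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // picks up the cluster's ZK quorum
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan())) {
      for (Result row : scanner) {
        System.out.println(row); // one row per region, e.g. the hbase:namespace region
      }
    }
  }
}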
2023-06-02 14:56:06,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:56:06,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 14:56:06,336 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:56:06,338 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:56:06,358 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,360 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609 empty. 2023-06-02 14:56:06,361 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,361 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 14:56:06,419 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 14:56:06,422 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8d03b33016ffc306c2bda423a7a53609, NAME => 'hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp 2023-06-02 14:56:06,438 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:06,439 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8d03b33016ffc306c2bda423a7a53609, disabling compactions & flushes 2023-06-02 14:56:06,439 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 
2023-06-02 14:56:06,439 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:56:06,439 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. after waiting 0 ms 2023-06-02 14:56:06,439 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:56:06,439 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:56:06,439 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8d03b33016ffc306c2bda423a7a53609: 2023-06-02 14:56:06,443 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:56:06,459 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717766446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717766446"}]},"ts":"1685717766446"} 2023-06-02 14:56:06,486 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:56:06,488 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:56:06,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717766488"}]},"ts":"1685717766488"} 2023-06-02 14:56:06,498 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 14:56:06,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d03b33016ffc306c2bda423a7a53609, ASSIGN}] 2023-06-02 14:56:06,510 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d03b33016ffc306c2bda423a7a53609, ASSIGN 2023-06-02 14:56:06,512 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8d03b33016ffc306c2bda423a7a53609, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46785,1685717764597; forceNewPlan=false, retain=false 2023-06-02 14:56:06,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d03b33016ffc306c2bda423a7a53609, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:06,663 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717766663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717766663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717766663"}]},"ts":"1685717766663"} 2023-06-02 14:56:06,668 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8d03b33016ffc306c2bda423a7a53609, server=jenkins-hbase4.apache.org,46785,1685717764597}] 2023-06-02 14:56:06,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:56:06,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8d03b33016ffc306c2bda423a7a53609, NAME => 'hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:56:06,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:06,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,835 INFO [StoreOpener-8d03b33016ffc306c2bda423a7a53609-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,838 DEBUG [StoreOpener-8d03b33016ffc306c2bda423a7a53609-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/info 2023-06-02 14:56:06,838 DEBUG [StoreOpener-8d03b33016ffc306c2bda423a7a53609-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/info 2023-06-02 14:56:06,838 INFO [StoreOpener-8d03b33016ffc306c2bda423a7a53609-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8d03b33016ffc306c2bda423a7a53609 columnFamilyName info 2023-06-02 14:56:06,839 INFO [StoreOpener-8d03b33016ffc306c2bda423a7a53609-1] regionserver.HStore(310): Store=8d03b33016ffc306c2bda423a7a53609/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:06,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:56:06,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:56:06,850 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8d03b33016ffc306c2bda423a7a53609; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=745280, jitterRate=-0.052328258752822876}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:56:06,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8d03b33016ffc306c2bda423a7a53609: 2023-06-02 14:56:06,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609., pid=6, masterSystemTime=1685717766822 2023-06-02 14:56:06,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:56:06,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 
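The "Opened ...; SteppingSplitPolicy..., FlushLargeStoresPolicy{...}" records report which split and flush policies each region came up with. A hedged sketch of the two settings involved; the flush lower-bound key is quoted verbatim by the FlushLargeStoresPolicy message earlier in the log, and the table name below is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Illustrative sketch only; not the mechanism this test uses.
public final class RegionPolicySketch {
  // Cluster-wide default split policy (a table descriptor can override it).
  public static Configuration splitPolicyConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    return conf;
  }

  // The FlushLargeStoresPolicy message looks this key up on the table descriptor;
  // "someTable" is a hypothetical name for illustration.
  public static TableDescriptor withFlushLowerBound() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("someTable"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16L * 1024 * 1024))
        .build();
  }
}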
2023-06-02 14:56:06,858 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d03b33016ffc306c2bda423a7a53609, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:06,858 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717766857"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717766857"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717766857"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717766857"}]},"ts":"1685717766857"} 2023-06-02 14:56:06,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 14:56:06,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8d03b33016ffc306c2bda423a7a53609, server=jenkins-hbase4.apache.org,46785,1685717764597 in 194 msec 2023-06-02 14:56:06,870 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 14:56:06,870 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8d03b33016ffc306c2bda423a7a53609, ASSIGN in 358 msec 2023-06-02 14:56:06,871 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:56:06,872 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717766872"}]},"ts":"1685717766872"} 2023-06-02 14:56:06,875 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 14:56:06,878 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:56:06,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 554 msec 2023-06-02 14:56:06,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 14:56:06,943 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:56:06,943 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:56:06,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 14:56:07,002 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): 
master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:56:07,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 36 msec 2023-06-02 14:56:07,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 14:56:07,027 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:56:07,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-06-02 14:56:07,040 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 14:56:07,043 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 14:56:07,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.379sec 2023-06-02 14:56:07,046 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 14:56:07,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 14:56:07,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 14:56:07,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43361,1685717763435-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 14:56:07,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43361,1685717763435-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
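The two CreateNamespaceProcedure records above are the master bootstrapping its built-in 'default' and 'hbase' namespaces. For a user namespace the equivalent client call would look roughly like this (the namespace name is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch: create and list namespaces from the client side.
public final class NamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createNamespace(NamespaceDescriptor.create("test_ns").build()); // hypothetical name
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName()); // default, hbase, test_ns
      }
    }
  }
}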
2023-06-02 14:56:07,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 14:56:07,139 DEBUG [Listener at localhost/40969] zookeeper.ReadOnlyZKClient(139): Connect 0x07965c57 to 127.0.0.1:51661 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:56:07,145 DEBUG [Listener at localhost/40969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24c517ed, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:56:07,159 DEBUG [hconnection-0x60e6c1ab-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:56:07,174 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:56:07,184 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43361,1685717763435 2023-06-02 14:56:07,184 INFO [Listener at localhost/40969] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:56:07,192 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 14:56:07,193 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:56:07,194 INFO [Listener at localhost/40969] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 14:56:07,204 DEBUG [Listener at localhost/40969] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-02 14:56:07,208 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44664, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-02 14:56:07,217 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-02 14:56:07,217 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
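The two TableDescriptorChecker warnings above fire because the cluster is running with a very small region max file size (786432 bytes) and memstore flush size (8192 bytes), presumably so the test can trigger flushes and log rolls quickly; the balanceSwitch=false line is the client turning the balancer off. A hedged sketch of the corresponding settings (where exactly the test applies them is not shown in this log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch of the tiny-region settings and the balancer switch seen in the log.
public final class SmallRegionTestConfSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong(HConstants.HREGION_MAX_FILESIZE, 786432L);      // "hbase.hregion.max.filesize"
    conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 8192L); // "hbase.hregion.memstore.flush.size"
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.balancerSwitch(false, true); // matches "set balanceSwitch=false" in the log
    }
  }
}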
2023-06-02 14:56:07,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:56:07,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-02 14:56:07,225 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:56:07,227 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:56:07,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-02 14:56:07,231 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,232 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451 empty. 
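The create request above defines a single 'info' family with VERSIONS=1, a ROW bloom filter and a 64 KB block size, and the TableDescriptorChecker warnings show the test deliberately shrinking the region max file size (786432) and memstore flush size (8192) to force early splits and flushes. A rough equivalent of that table creation with the HBase 2.x Admin API, values copied from the descriptor in the log; the surrounding class and the admin handle are assumptions:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestTableExample {
      static void create(Admin admin) throws java.io.IOException {
        TableDescriptorBuilder table = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
            // Small file/flush sizes like the ones the checker warns about above.
            .setMaxFileSize(786432)
            .setMemStoreFlushSize(8192)
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.ROW)
                .setBlocksize(65536)
                .setCompressionType(Compression.Algorithm.NONE)
                .build());
        admin.createTable(table.build());
      }
    }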
2023-06-02 14:56:07,234 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,234 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-02 14:56:07,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:56:07,260 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-02 14:56:07,262 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3d22d2c04f19228a22376fb7200c4451, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/.tmp 2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 3d22d2c04f19228a22376fb7200c4451, disabling compactions & flushes 2023-06-02 14:56:07,276 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. after waiting 0 ms 2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:07,276 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 
2023-06-02 14:56:07,276 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:56:07,280 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:56:07,282 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685717767282"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717767282"}]},"ts":"1685717767282"} 2023-06-02 14:56:07,285 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:56:07,287 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:56:07,287 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717767287"}]},"ts":"1685717767287"} 2023-06-02 14:56:07,289 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-02 14:56:07,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=3d22d2c04f19228a22376fb7200c4451, ASSIGN}] 2023-06-02 14:56:07,297 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=3d22d2c04f19228a22376fb7200c4451, ASSIGN 2023-06-02 14:56:07,299 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=3d22d2c04f19228a22376fb7200c4451, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46785,1685717764597; forceNewPlan=false, retain=false 2023-06-02 14:56:07,450 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3d22d2c04f19228a22376fb7200c4451, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:07,450 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685717767450"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717767450"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717767450"}]},"ts":"1685717767450"} 2023-06-02 14:56:07,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 3d22d2c04f19228a22376fb7200c4451, server=jenkins-hbase4.apache.org,46785,1685717764597}] 2023-06-02 14:56:07,614 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:07,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3d22d2c04f19228a22376fb7200c4451, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:56:07,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:56:07,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,617 INFO [StoreOpener-3d22d2c04f19228a22376fb7200c4451-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,619 DEBUG [StoreOpener-3d22d2c04f19228a22376fb7200c4451-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info 2023-06-02 14:56:07,619 DEBUG [StoreOpener-3d22d2c04f19228a22376fb7200c4451-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info 2023-06-02 14:56:07,620 INFO [StoreOpener-3d22d2c04f19228a22376fb7200c4451-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3d22d2c04f19228a22376fb7200c4451 columnFamilyName info 2023-06-02 14:56:07,621 INFO [StoreOpener-3d22d2c04f19228a22376fb7200c4451-1] regionserver.HStore(310): Store=3d22d2c04f19228a22376fb7200c4451/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:56:07,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:07,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:56:07,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3d22d2c04f19228a22376fb7200c4451; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=794068, jitterRate=0.009709745645523071}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:56:07,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:56:07,632 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451., pid=11, masterSystemTime=1685717767608 2023-06-02 14:56:07,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:07,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 
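Between the region opening above and the flush requests recorded below, the test pushes rows into the 'info' family until the 8 KB flush size is crossed (each flush below reports ~7.36 KB across 7 entries). A hedged sketch of what such client writes look like with the standard Table API; row keys, qualifier and value sizes are illustrative, not taken from the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteRowsExample {
      static void writeBatch(Connection conn) throws java.io.IOException {
        byte[] value = new byte[1024];  // ~1 KB per cell; a handful of puts crosses the 8 KB flush size
        try (Table table = conn.getTable(
            TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))) {
          for (int i = 0; i < 7; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));  // hypothetical row keys
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), value);
            table.put(put);
          }
        }
      }
    }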
2023-06-02 14:56:07,636 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3d22d2c04f19228a22376fb7200c4451, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:56:07,636 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685717767636"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717767636"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717767636"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717767636"}]},"ts":"1685717767636"} 2023-06-02 14:56:07,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-02 14:56:07,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 3d22d2c04f19228a22376fb7200c4451, server=jenkins-hbase4.apache.org,46785,1685717764597 in 185 msec 2023-06-02 14:56:07,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-02 14:56:07,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=3d22d2c04f19228a22376fb7200c4451, ASSIGN in 349 msec 2023-06-02 14:56:07,648 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:56:07,649 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717767649"}]},"ts":"1685717767649"} 2023-06-02 14:56:07,651 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-02 14:56:07,654 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:56:07,657 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 435 msec 2023-06-02 14:56:11,630 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-02 14:56:11,748 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-02 14:56:11,749 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 14:56:11,751 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-02 14:56:13,606 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 14:56:13,607 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-02 14:56:17,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43361] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:56:17,250 INFO [Listener at localhost/40969] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-02 14:56:17,254 DEBUG [Listener at localhost/40969] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-02 14:56:17,255 DEBUG [Listener at localhost/40969] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:56:29,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46785] regionserver.HRegion(9158): Flush requested on 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:29,285 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d22d2c04f19228a22376fb7200c4451 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 14:56:29,360 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/baaf228a80f2464ab87f21349d507409 2023-06-02 14:56:29,404 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/baaf228a80f2464ab87f21349d507409 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409 2023-06-02 14:56:29,414 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409, entries=7, sequenceid=11, filesize=12.1 K 2023-06-02 14:56:29,417 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 3d22d2c04f19228a22376fb7200c4451 in 132ms, sequenceid=11, compaction requested=false 2023-06-02 14:56:29,419 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:56:37,497 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:39,701 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:41,904 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:44,108 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:44,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46785] regionserver.HRegion(9158): Flush requested on 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:56:44,108 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d22d2c04f19228a22376fb7200c4451 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 14:56:44,309 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:44,327 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/12e936d0213a4a229165867ed65b78e8 2023-06-02 14:56:44,337 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/12e936d0213a4a229165867ed65b78e8 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8 2023-06-02 14:56:44,346 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8, entries=7, sequenceid=21, filesize=12.1 K 2023-06-02 14:56:44,547 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:44,548 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 3d22d2c04f19228a22376fb7200c4451 in 439ms, sequenceid=21, compaction requested=false 2023-06-02 14:56:44,548 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:56:44,548 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-02 14:56:44,548 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 14:56:44,550 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409 
because midkey is the same as first or last row 2023-06-02 14:56:46,311 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:48,514 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:48,515 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46785%2C1685717764597:(num 1685717765941) roll requested 2023-06-02 14:56:48,516 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:48,728 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK], DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK]] 2023-06-02 14:56:48,730 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717765941 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717808516 2023-06-02 14:56:48,731 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:56:48,731 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717765941 is not closed yet, will try archiving it next time 2023-06-02 14:56:58,528 INFO [Listener at localhost/40969] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-02 14:57:03,531 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:03,531 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:03,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46785] regionserver.HRegion(9158): Flush requested on 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:57:03,531 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C46785%2C1685717764597:(num 1685717808516) roll requested 2023-06-02 14:57:03,532 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d22d2c04f19228a22376fb7200c4451 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 14:57:05,532 INFO [Listener at localhost/40969] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-02 14:57:08,533 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:08,533 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:08,545 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:08,545 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:08,547 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717808516 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717823532 2023-06-02 14:57:08,547 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36973,DS-099657df-3c62-454f-b758-40bbc385c6a6,DISK], DatanodeInfoWithStorage[127.0.0.1:43517,DS-960b30de-cf30-47cd-9def-035dd013f5b3,DISK]] 2023-06-02 14:57:08,548 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597/jenkins-hbase4.apache.org%2C46785%2C1685717764597.1685717808516 is not closed yet, will try archiving it next time 2023-06-02 14:57:08,552 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/428ee3a820af45db8784a76c12bba69f 2023-06-02 14:57:08,561 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/428ee3a820af45db8784a76c12bba69f as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f 2023-06-02 14:57:08,569 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f, entries=7, sequenceid=31, filesize=12.1 K 2023-06-02 14:57:08,572 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 3d22d2c04f19228a22376fb7200c4451 in 5041ms, sequenceid=31, compaction requested=true 2023-06-02 14:57:08,572 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:57:08,573 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-02 14:57:08,573 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 14:57:08,573 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409 because midkey is the same as first or last row 2023-06-02 14:57:08,575 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 14:57:08,575 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 14:57:08,580 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 14:57:08,582 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HStore(1912): 3d22d2c04f19228a22376fb7200c4451/info is initiating minor compaction (all files) 2023-06-02 14:57:08,582 INFO [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 3d22d2c04f19228a22376fb7200c4451/info in TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 
2023-06-02 14:57:08,583 INFO [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f] into tmpdir=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp, totalSize=36.3 K 2023-06-02 14:57:08,584 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] compactions.Compactor(207): Compacting baaf228a80f2464ab87f21349d507409, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685717777260 2023-06-02 14:57:08,584 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] compactions.Compactor(207): Compacting 12e936d0213a4a229165867ed65b78e8, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685717791286 2023-06-02 14:57:08,585 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] compactions.Compactor(207): Compacting 428ee3a820af45db8784a76c12bba69f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685717806109 2023-06-02 14:57:08,611 INFO [RS:0;jenkins-hbase4:46785-shortCompactions-0] throttle.PressureAwareThroughputController(145): 3d22d2c04f19228a22376fb7200c4451#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 14:57:08,631 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/b5602734261647d8972ecd4048364b09 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/b5602734261647d8972ecd4048364b09 2023-06-02 14:57:08,647 INFO [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 3d22d2c04f19228a22376fb7200c4451/info of 3d22d2c04f19228a22376fb7200c4451 into b5602734261647d8972ecd4048364b09(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
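The three 12.1 K flush files above were picked by the ExploringCompactionPolicy and rewritten into a single 27.0 K store file. Here the flushes and the minor compaction are triggered internally by memstore pressure and file counts, but a test can request the equivalent operations explicitly through the Admin API; a sketch, with the admin handle and table name as the only inputs:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class FlushAndCompactExample {
      static void flushAndCompact(Admin admin) throws java.io.IOException {
        TableName table = TableName.valueOf("TestLogRolling-testSlowSyncLogRolling");
        admin.flush(table);         // ask the region server to flush the memstore to a new HFile
        admin.majorCompact(table);  // request a rewrite of all store files into one
      }
    }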
2023-06-02 14:57:08,647 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:57:08,647 INFO [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451., storeName=3d22d2c04f19228a22376fb7200c4451/info, priority=13, startTime=1685717828575; duration=0sec 2023-06-02 14:57:08,648 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-02 14:57:08,649 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 14:57:08,649 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/b5602734261647d8972ecd4048364b09 because midkey is the same as first or last row 2023-06-02 14:57:08,649 DEBUG [RS:0;jenkins-hbase4:46785-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 14:57:20,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46785] regionserver.HRegion(9158): Flush requested on 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:57:20,654 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d22d2c04f19228a22376fb7200c4451 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 14:57:20,672 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/fd08535461fb43079343006bb346603b 2023-06-02 14:57:20,680 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/fd08535461fb43079343006bb346603b as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/fd08535461fb43079343006bb346603b 2023-06-02 14:57:20,687 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/fd08535461fb43079343006bb346603b, entries=7, sequenceid=42, filesize=12.1 K 2023-06-02 14:57:20,688 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 3d22d2c04f19228a22376fb7200c4451 in 34ms, sequenceid=42, compaction requested=false 2023-06-02 14:57:20,689 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:57:20,689 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-06-02 
14:57:20,689 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 14:57:20,689 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/b5602734261647d8972ecd4048364b09 because midkey is the same as first or last row 2023-06-02 14:57:28,664 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 14:57:28,665 INFO [Listener at localhost/40969] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-02 14:57:28,665 DEBUG [Listener at localhost/40969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07965c57 to 127.0.0.1:51661 2023-06-02 14:57:28,665 DEBUG [Listener at localhost/40969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:57:28,666 DEBUG [Listener at localhost/40969] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 14:57:28,666 DEBUG [Listener at localhost/40969] util.JVMClusterUtil(257): Found active master hash=956715569, stopped=false 2023-06-02 14:57:28,666 INFO [Listener at localhost/40969] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43361,1685717763435 2023-06-02 14:57:28,668 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:57:28,668 INFO [Listener at localhost/40969] procedure2.ProcedureExecutor(629): Stopping 2023-06-02 14:57:28,668 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:57:28,669 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:28,669 DEBUG [Listener at localhost/40969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x108ed0ed to 127.0.0.1:51661 2023-06-02 14:57:28,669 DEBUG [Listener at localhost/40969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:57:28,670 INFO [Listener at localhost/40969] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,46785,1685717764597' ***** 2023-06-02 14:57:28,670 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:57:28,670 INFO [Listener at localhost/40969] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 14:57:28,670 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:57:28,670 INFO [RS:0;jenkins-hbase4:46785] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 14:57:28,671 INFO [RS:0;jenkins-hbase4:46785] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-06-02 14:57:28,671 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 14:57:28,671 INFO [RS:0;jenkins-hbase4:46785] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-02 14:57:28,671 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(3303): Received CLOSE for 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:57:28,672 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(3303): Received CLOSE for 3d22d2c04f19228a22376fb7200c4451 2023-06-02 14:57:28,672 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:57:28,672 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d9304b6 to 127.0.0.1:51661 2023-06-02 14:57:28,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8d03b33016ffc306c2bda423a7a53609, disabling compactions & flushes 2023-06-02 14:57:28,672 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:57:28,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:57:28,673 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 14:57:28,673 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 14:57:28,673 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-02 14:57:28,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:57:28,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. after waiting 0 ms 2023-06-02 14:57:28,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 
2023-06-02 14:57:28,673 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 14:57:28,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8d03b33016ffc306c2bda423a7a53609 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 14:57:28,673 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-02 14:57:28,673 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 8d03b33016ffc306c2bda423a7a53609=hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609., 3d22d2c04f19228a22376fb7200c4451=TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.} 2023-06-02 14:57:28,675 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1504): Waiting on 1588230740, 3d22d2c04f19228a22376fb7200c4451, 8d03b33016ffc306c2bda423a7a53609 2023-06-02 14:57:28,675 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:57:28,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:57:28,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:57:28,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:57:28,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:57:28,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-02 14:57:28,719 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/.tmp/info/271af9b9d1c34d2692ed841a73f08be8 2023-06-02 14:57:28,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/.tmp/info/40d2d3b88ed8415081550a66699018c4 2023-06-02 14:57:28,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/.tmp/info/40d2d3b88ed8415081550a66699018c4 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/info/40d2d3b88ed8415081550a66699018c4 2023-06-02 14:57:28,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/info/40d2d3b88ed8415081550a66699018c4, entries=2, sequenceid=6, filesize=4.8 K 2023-06-02 14:57:28,754 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8d03b33016ffc306c2bda423a7a53609 in 81ms, sequenceid=6, compaction requested=false 2023-06-02 14:57:28,754 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/.tmp/table/ac880d7f9b0f497f92eeb13dd75307a2 2023-06-02 14:57:28,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/namespace/8d03b33016ffc306c2bda423a7a53609/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-02 14:57:28,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:57:28,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8d03b33016ffc306c2bda423a7a53609: 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685717766323.8d03b33016ffc306c2bda423a7a53609. 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/.tmp/info/271af9b9d1c34d2692ed841a73f08be8 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/info/271af9b9d1c34d2692ed841a73f08be8 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3d22d2c04f19228a22376fb7200c4451, disabling compactions & flushes 2023-06-02 14:57:28,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. after waiting 0 ms 2023-06-02 14:57:28,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 
2023-06-02 14:57:28,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 3d22d2c04f19228a22376fb7200c4451 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-02 14:57:28,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/info/271af9b9d1c34d2692ed841a73f08be8, entries=20, sequenceid=14, filesize=7.4 K 2023-06-02 14:57:28,780 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/.tmp/table/ac880d7f9b0f497f92eeb13dd75307a2 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/table/ac880d7f9b0f497f92eeb13dd75307a2 2023-06-02 14:57:28,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/4277d261d93844c18eb428c0d3a0fa06 2023-06-02 14:57:28,788 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-02 14:57:28,788 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-02 14:57:28,792 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/table/ac880d7f9b0f497f92eeb13dd75307a2, entries=4, sequenceid=14, filesize=4.8 K 2023-06-02 14:57:28,794 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 118ms, sequenceid=14, compaction requested=false 2023-06-02 14:57:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/.tmp/info/4277d261d93844c18eb428c0d3a0fa06 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/4277d261d93844c18eb428c0d3a0fa06 2023-06-02 14:57:28,805 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-02 14:57:28,806 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 14:57:28,807 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:57:28,807 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:57:28,808 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-02 14:57:28,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/4277d261d93844c18eb428c0d3a0fa06, entries=3, sequenceid=48, filesize=7.9 K 2023-06-02 14:57:28,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 3d22d2c04f19228a22376fb7200c4451 in 45ms, sequenceid=48, compaction requested=true 2023-06-02 14:57:28,814 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f] to archive 2023-06-02 14:57:28,815 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-02 14:57:28,821 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409 to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/archive/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/baaf228a80f2464ab87f21349d507409 2023-06-02 14:57:28,823 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8 to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/archive/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/12e936d0213a4a229165867ed65b78e8 2023-06-02 14:57:28,825 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/archive/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/info/428ee3a820af45db8784a76c12bba69f 2023-06-02 
14:57:28,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/data/default/TestLogRolling-testSlowSyncLogRolling/3d22d2c04f19228a22376fb7200c4451/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-02 14:57:28,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:57:28,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3d22d2c04f19228a22376fb7200c4451: 2023-06-02 14:57:28,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685717767216.3d22d2c04f19228a22376fb7200c4451. 2023-06-02 14:57:28,875 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46785,1685717764597; all regions closed. 2023-06-02 14:57:28,877 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:57:28,888 DEBUG [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/oldWALs 2023-06-02 14:57:28,888 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46785%2C1685717764597.meta:.meta(num 1685717766073) 2023-06-02 14:57:28,889 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/WALs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:57:28,904 DEBUG [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/oldWALs 2023-06-02 14:57:28,905 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C46785%2C1685717764597:(num 1685717823532) 2023-06-02 14:57:28,905 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:57:28,905 INFO [RS:0;jenkins-hbase4:46785] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:57:28,905 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-02 14:57:28,905 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-02 14:57:28,907 INFO [RS:0;jenkins-hbase4:46785] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46785 2023-06-02 14:57:28,917 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:57:28,917 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46785,1685717764597 2023-06-02 14:57:28,917 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:57:28,918 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46785,1685717764597] 2023-06-02 14:57:28,918 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46785,1685717764597; numProcessing=1 2023-06-02 14:57:28,920 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46785,1685717764597 already deleted, retry=false 2023-06-02 14:57:28,921 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46785,1685717764597 expired; onlineServers=0 2023-06-02 14:57:28,921 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43361,1685717763435' ***** 2023-06-02 14:57:28,921 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-02 14:57:28,921 DEBUG [M:0;jenkins-hbase4:43361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@783b580a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:57:28,921 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43361,1685717763435 2023-06-02 14:57:28,921 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43361,1685717763435; all regions closed. 2023-06-02 14:57:28,921 DEBUG [M:0;jenkins-hbase4:43361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:57:28,921 DEBUG [M:0;jenkins-hbase4:43361] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 14:57:28,922 DEBUG [M:0;jenkins-hbase4:43361] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-02 14:57:28,922 INFO [M:0;jenkins-hbase4:43361] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-02 14:57:28,922 INFO [M:0;jenkins-hbase4:43361] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-02 14:57:28,922 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-02 14:57:28,922 INFO [M:0;jenkins-hbase4:43361] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-02 14:57:28,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717765567] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717765567,5,FailOnTimeoutGroup] 2023-06-02 14:57:28,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717765567] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717765567,5,FailOnTimeoutGroup] 2023-06-02 14:57:28,923 DEBUG [M:0;jenkins-hbase4:43361] master.HMaster(1512): Stopping service threads 2023-06-02 14:57:28,924 INFO [M:0;jenkins-hbase4:43361] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-02 14:57:28,925 INFO [M:0;jenkins-hbase4:43361] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-02 14:57:28,925 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-02 14:57:28,927 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-02 14:57:28,927 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:28,927 DEBUG [M:0;jenkins-hbase4:43361] zookeeper.ZKUtil(398): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-02 14:57:28,927 WARN [M:0;jenkins-hbase4:43361] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-02 14:57:28,927 INFO [M:0;jenkins-hbase4:43361] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-02 14:57:28,927 INFO [M:0;jenkins-hbase4:43361] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-02 14:57:28,928 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:57:28,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:57:28,928 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:28,928 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:28,928 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:57:28,928 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-02 14:57:28,929 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.27 KB heapSize=46.71 KB 2023-06-02 14:57:28,963 INFO [M:0;jenkins-hbase4:43361] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.27 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7d005fa629344a649d7514ac982fa7e1 2023-06-02 14:57:28,970 INFO [M:0;jenkins-hbase4:43361] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d005fa629344a649d7514ac982fa7e1 2023-06-02 14:57:28,971 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7d005fa629344a649d7514ac982fa7e1 as hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7d005fa629344a649d7514ac982fa7e1 2023-06-02 14:57:28,978 INFO [M:0;jenkins-hbase4:43361] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d005fa629344a649d7514ac982fa7e1 2023-06-02 14:57:28,979 INFO [M:0;jenkins-hbase4:43361] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7d005fa629344a649d7514ac982fa7e1, entries=11, sequenceid=100, filesize=6.1 K 2023-06-02 14:57:28,980 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegion(2948): Finished flush of dataSize ~38.27 KB/39184, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 52ms, sequenceid=100, compaction requested=false 2023-06-02 14:57:28,982 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:28,982 DEBUG [M:0;jenkins-hbase4:43361] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:57:28,988 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/MasterData/WALs/jenkins-hbase4.apache.org,43361,1685717763435 2023-06-02 14:57:28,996 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 14:57:28,996 INFO [M:0;jenkins-hbase4:43361] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-02 14:57:28,997 INFO [M:0;jenkins-hbase4:43361] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43361 2023-06-02 14:57:29,000 DEBUG [M:0;jenkins-hbase4:43361] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43361,1685717763435 already deleted, retry=false 2023-06-02 14:57:29,020 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:57:29,020 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46785,1685717764597; zookeeper connection closed. 
2023-06-02 14:57:29,020 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x1008c0a451f0001, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:57:29,021 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2062d00a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2062d00a 2023-06-02 14:57:29,021 INFO [Listener at localhost/40969] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-02 14:57:29,120 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:57:29,120 INFO [M:0;jenkins-hbase4:43361] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43361,1685717763435; zookeeper connection closed. 2023-06-02 14:57:29,120 DEBUG [Listener at localhost/40969-EventThread] zookeeper.ZKWatcher(600): master:43361-0x1008c0a451f0000, quorum=127.0.0.1:51661, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:57:29,122 WARN [Listener at localhost/40969] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:29,126 INFO [Listener at localhost/40969] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:29,230 WARN [BP-420267747-172.31.14.131-1685717760344 heartbeating to localhost/127.0.0.1:42517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:57:29,230 WARN [BP-420267747-172.31.14.131-1685717760344 heartbeating to localhost/127.0.0.1:42517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-420267747-172.31.14.131-1685717760344 (Datanode Uuid bee256ca-c51d-4f0d-87f0-c5e120e1a39a) service to localhost/127.0.0.1:42517 2023-06-02 14:57:29,232 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/dfs/data/data3/current/BP-420267747-172.31.14.131-1685717760344] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:29,232 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/dfs/data/data4/current/BP-420267747-172.31.14.131-1685717760344] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:29,233 WARN [Listener at localhost/40969] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:29,235 INFO [Listener at localhost/40969] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:29,338 WARN [BP-420267747-172.31.14.131-1685717760344 heartbeating to localhost/127.0.0.1:42517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:57:29,339 WARN [BP-420267747-172.31.14.131-1685717760344 heartbeating to localhost/127.0.0.1:42517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-420267747-172.31.14.131-1685717760344 
(Datanode Uuid 24f6b340-d19e-440a-bf0b-5eae755dd490) service to localhost/127.0.0.1:42517 2023-06-02 14:57:29,339 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/dfs/data/data1/current/BP-420267747-172.31.14.131-1685717760344] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:29,339 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/cluster_ba83aa0c-5dd3-8da7-41f4-59bf75d412fd/dfs/data/data2/current/BP-420267747-172.31.14.131-1685717760344] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:29,374 INFO [Listener at localhost/40969] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:29,487 INFO [Listener at localhost/40969] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-02 14:57:29,530 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-02 14:57:29,542 INFO [Listener at localhost/40969] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:42517 from 
jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:42517 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:42517 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@25d59225 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:42517 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:42517 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/40969 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=440 (was 263) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=73 (was 184), ProcessCount=170 (was 170), AvailableMemoryMB=1706 (was 2242) 2023-06-02 14:57:29,551 INFO [Listener at localhost/40969] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=440, MaxFileDescriptor=60000, SystemLoadAverage=73, ProcessCount=170, AvailableMemoryMB=1705 2023-06-02 14:57:29,551 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-02 14:57:29,551 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/hadoop.log.dir so I do NOT create it in target/test-data/acaee100-ec95-af91-c9b7-5b553504f892 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ac0c3264-e716-5efa-223f-09031f809f38/hadoop.tmp.dir so I do NOT create it in target/test-data/acaee100-ec95-af91-c9b7-5b553504f892 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4, deleteOnExit=true 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/test.cache.data in system properties and HBase conf 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/hadoop.tmp.dir in system properties and HBase conf 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/hadoop.log.dir in system properties and HBase conf 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-02 14:57:29,552 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-02 14:57:29,553 DEBUG [Listener at localhost/40969] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-02 14:57:29,553 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:57:29,553 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:57:29,553 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-02 14:57:29,553 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/nfs.dump.dir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir in system properties and HBase conf 2023-06-02 14:57:29,554 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:57:29,555 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-02 14:57:29,555 INFO [Listener at localhost/40969] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-02 14:57:29,556 WARN [Listener at localhost/40969] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-02 14:57:29,560 WARN [Listener at localhost/40969] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:57:29,560 WARN [Listener at localhost/40969] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:57:29,600 WARN [Listener at localhost/40969] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:29,602 INFO [Listener at localhost/40969] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:29,606 INFO [Listener at localhost/40969] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_40509_hdfs____.hws02c/webapp 2023-06-02 14:57:29,700 INFO [Listener at localhost/40969] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40509 2023-06-02 14:57:29,702 WARN [Listener at localhost/40969] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-02 14:57:29,705 WARN [Listener at localhost/40969] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:57:29,705 WARN [Listener at localhost/40969] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:57:29,747 WARN [Listener at localhost/34397] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:29,757 WARN [Listener at localhost/34397] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:29,760 WARN [Listener at localhost/34397] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:29,762 INFO [Listener at localhost/34397] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:29,766 INFO [Listener at localhost/34397] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_44249_datanode____n59rfi/webapp 2023-06-02 14:57:29,825 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:57:29,863 INFO [Listener at localhost/34397] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44249 2023-06-02 14:57:29,872 WARN [Listener at localhost/33207] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:29,891 WARN [Listener at localhost/33207] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:29,894 WARN [Listener at localhost/33207] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:29,895 INFO [Listener at localhost/33207] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:29,902 INFO [Listener at localhost/33207] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_41195_datanode____15cg9j/webapp 2023-06-02 14:57:29,992 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x381c4b7736043190: Processing first storage report for DS-36374a88-acf3-46cb-8f32-23f63ad5facb from datanode a9cb3d3e-740a-4e49-b4c0-a95b05c6ad71 2023-06-02 14:57:29,992 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x381c4b7736043190: from storage DS-36374a88-acf3-46cb-8f32-23f63ad5facb node DatanodeRegistration(127.0.0.1:34867, datanodeUuid=a9cb3d3e-740a-4e49-b4c0-a95b05c6ad71, infoPort=45971, infoSecurePort=0, ipcPort=33207, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:29,992 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x381c4b7736043190: Processing first storage report for DS-2aa35dd9-31c1-4669-862c-c99dd1ac4614 from datanode 
a9cb3d3e-740a-4e49-b4c0-a95b05c6ad71 2023-06-02 14:57:29,992 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x381c4b7736043190: from storage DS-2aa35dd9-31c1-4669-862c-c99dd1ac4614 node DatanodeRegistration(127.0.0.1:34867, datanodeUuid=a9cb3d3e-740a-4e49-b4c0-a95b05c6ad71, infoPort=45971, infoSecurePort=0, ipcPort=33207, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:30,020 INFO [Listener at localhost/33207] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41195 2023-06-02 14:57:30,032 WARN [Listener at localhost/46485] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:30,130 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa1051ae524abefec: Processing first storage report for DS-557da12f-a58a-4e52-8aaf-2a570ccab906 from datanode ae50a0db-af61-48db-96b8-ea952100721c 2023-06-02 14:57:30,130 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa1051ae524abefec: from storage DS-557da12f-a58a-4e52-8aaf-2a570ccab906 node DatanodeRegistration(127.0.0.1:39667, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=46743, infoSecurePort=0, ipcPort=46485, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:30,130 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa1051ae524abefec: Processing first storage report for DS-b69e889c-bf94-4ad2-9cb6-921f188e963d from datanode ae50a0db-af61-48db-96b8-ea952100721c 2023-06-02 14:57:30,130 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa1051ae524abefec: from storage DS-b69e889c-bf94-4ad2-9cb6-921f188e963d node DatanodeRegistration(127.0.0.1:39667, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=46743, infoSecurePort=0, ipcPort=46485, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:30,148 DEBUG [Listener at localhost/46485] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892 2023-06-02 14:57:30,153 INFO [Listener at localhost/46485] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/zookeeper_0, clientPort=52513, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-02 14:57:30,155 INFO [Listener at localhost/46485] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=52513 2023-06-02 14:57:30,156 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,157 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,175 INFO [Listener at localhost/46485] util.FSUtils(471): Created version file at hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e with version=8 2023-06-02 14:57:30,176 INFO [Listener at localhost/46485] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:57:30,178 INFO [Listener at localhost/46485] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:57:30,179 INFO [Listener at localhost/46485] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39103 2023-06-02 14:57:30,180 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,181 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,182 INFO [Listener at localhost/46485] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39103 connecting to ZooKeeper ensemble=127.0.0.1:52513 2023-06-02 14:57:30,194 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:391030x0, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:57:30,195 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:39103-0x1008c0b9b2a0000 connected 2023-06-02 14:57:30,220 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:57:30,221 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:57:30,221 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:57:30,222 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39103 2023-06-02 14:57:30,222 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39103 2023-06-02 14:57:30,222 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39103 2023-06-02 14:57:30,225 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39103 2023-06-02 14:57:30,226 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39103 2023-06-02 14:57:30,226 INFO [Listener at localhost/46485] master.HMaster(444): hbase.rootdir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e, hbase.cluster.distributed=false 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:57:30,241 INFO [Listener at localhost/46485] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:57:30,243 INFO [Listener at localhost/46485] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36927 2023-06-02 14:57:30,243 INFO [Listener at localhost/46485] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 14:57:30,245 DEBUG [Listener at 
localhost/46485] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 14:57:30,245 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,247 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,248 INFO [Listener at localhost/46485] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36927 connecting to ZooKeeper ensemble=127.0.0.1:52513 2023-06-02 14:57:30,252 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:369270x0, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:57:30,253 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): regionserver:369270x0, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:57:30,253 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): regionserver:369270x0, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:57:30,254 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): regionserver:369270x0, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:57:30,259 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36927-0x1008c0b9b2a0001 connected 2023-06-02 14:57:30,260 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36927 2023-06-02 14:57:30,260 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36927 2023-06-02 14:57:30,263 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36927 2023-06-02 14:57:30,263 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36927 2023-06-02 14:57:30,266 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36927 2023-06-02 14:57:30,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,269 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:57:30,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,270 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:57:30,270 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:57:30,271 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:57:30,272 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:57:30,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39103,1685717850177 from backup master directory 2023-06-02 14:57:30,275 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,275 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:57:30,275 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
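[Editorial sketch] The watcher traffic above (watches set on /hbase/master and /hbase/running before the znodes exist, followed by NodeCreated / NodeDeleted events) goes through HBase's internal ZKWatcher and ZKUtil. As a standalone illustration of the same watch-before-create pattern, here is a sketch using the plain Apache ZooKeeper client; the connect string, session timeout and sleep are illustrative values, not taken from this run.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Watcher that prints NodeCreated/NodeDeleted events, analogous to the
    // /hbase/master and /hbase/backup-masters events recorded above.
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("type=" + event.getType() + ", path=" + event.getPath());

    // Illustrative connect string and session timeout.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, watcher);

    // exists() with watch=true registers a one-shot watch even when the znode
    // is not there yet ("Set watcher on znode that does not yet exist").
    zk.exists("/hbase/master", true);

    Thread.sleep(60_000);  // keep the session alive long enough to observe events
    zk.close();
  }
}
```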
2023-06-02 14:57:30,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/hbase.id with ID: ca074223-c8e2-414c-bc0c-ee9b49bb81e8 2023-06-02 14:57:30,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:30,307 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,318 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x58a7c9a0 to 127.0.0.1:52513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:57:30,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b9d3ca7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:57:30,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:57:30,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-02 14:57:30,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:57:30,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store-tmp 2023-06-02 14:57:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:57:30,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:57:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:30,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:57:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:57:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39103%2C1685717850177, suffix=, logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177, archiveDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/oldWALs, maxLogs=10 2023-06-02 14:57:30,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717850343 2023-06-02 14:57:30,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK], DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]] 2023-06-02 14:57:30,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:57:30,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:30,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,353 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,355 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-02 14:57:30,355 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-02 14:57:30,356 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,360 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:57:30,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:57:30,363 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=781180, jitterRate=-0.006679370999336243}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:57:30,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:57:30,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-02 14:57:30,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-02 14:57:30,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
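[Editorial sketch] A few entries earlier the master's local-store WAL is created through FSHLogProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=10. Those figures are configuration-driven; below is a hedged sketch of setting comparable WAL-related keys on an HBaseConfiguration before starting a server. The property names are assumptions to verify against hbase-default.xml for the version in use, not values read back from this log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Assumed WAL-related keys; check them against the hbase-default.xml in use.
    conf.set("hbase.wal.provider", "filesystem");                   // FSHLog-based provider
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);   // roll at 50% of blocksize
    conf.setInt("hbase.regionserver.maxlogs", 10);
    conf.setLong("hbase.regionserver.logroll.period", 3600_000L);   // time-based roll, ms

    long rollsize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
        * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
    System.out.println("rollsize = " + rollsize);
  }
}
```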
2023-06-02 14:57:30,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-02 14:57:30,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-02 14:57:30,368 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-02 14:57:30,368 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-02 14:57:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-02 14:57:30,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-02 14:57:30,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-02 14:57:30,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-02 14:57:30,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-02 14:57:30,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-02 14:57:30,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-02 14:57:30,386 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-02 14:57:30,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-02 14:57:30,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-02 14:57:30,390 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:57:30,391 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:57:30,391 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39103,1685717850177, sessionid=0x1008c0b9b2a0000, setting cluster-up flag (Was=false) 2023-06-02 14:57:30,395 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-02 14:57:30,401 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,404 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 
14:57:30,409 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-02 14:57:30,409 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:30,410 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.hbase-snapshot/.tmp 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:57:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685717880419 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-02 14:57:30,419 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-02 14:57:30,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-02 14:57:30,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-02 14:57:30,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-02 14:57:30,420 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:57:30,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-02 14:57:30,421 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-02 14:57:30,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-02 14:57:30,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717850421,5,FailOnTimeoutGroup] 2023-06-02 14:57:30,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717850421,5,FailOnTimeoutGroup] 2023-06-02 14:57:30,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-02 14:57:30,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
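[Editorial sketch] The initialization above wires several cleaner delegates (TimeToLiveLogCleaner, ReplicationLogCleaner, HFileLinkCleaner, SnapshotHFileCleaner, and so on) into the LogsCleaner and HFileCleaner chores. Which delegates run is configurable; the sketch below sets the plugin lists on a Configuration. The key names, class lists and TTL value are assumptions to verify against the HBase version in use.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerPluginsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Assumed key names; the comma-separated values are delegate class names
    // matching the "Initialize cleaner=..." entries above.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
            + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner");

    // Assumed TTL key for old WALs kept by TimeToLiveLogCleaner (milliseconds).
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);

    System.out.println(conf.get("hbase.master.logcleaner.plugins"));
  }
}
```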
2023-06-02 14:57:30,422 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:57:30,436 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:57:30,436 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:57:30,437 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e 2023-06-02 14:57:30,449 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:30,451 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:57:30,453 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/info 2023-06-02 14:57:30,453 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:57:30,454 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,454 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:57:30,456 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:57:30,456 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:57:30,457 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,457 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:57:30,458 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/table 2023-06-02 14:57:30,459 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:57:30,459 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,460 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740 2023-06-02 14:57:30,461 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740 2023-06-02 14:57:30,463 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:57:30,465 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:57:30,467 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:57:30,468 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835201, jitterRate=0.06201407313346863}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:57:30,468 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:57:30,468 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:57:30,468 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:57:30,468 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:57:30,468 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:57:30,468 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:57:30,468 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(951): ClusterId : ca074223-c8e2-414c-bc0c-ee9b49bb81e8 2023-06-02 14:57:30,469 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 14:57:30,469 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:57:30,469 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:57:30,471 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:57:30,471 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 
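[Editorial sketch] The hbase:meta descriptor printed above uses the shell-style rendering ({NAME => 'info', BLOOMFILTER => 'NONE', ...}). For comparison, here is a client-side sketch of building a descriptor with equivalent per-family attributes through the 2.x builder API; the table name is hypothetical, and this is not how InitMetaProcedure itself constructs hbase:meta.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static void main(String[] args) {
    // Shell-style attributes such as BLOOMFILTER, VERSIONS and BLOCKSIZE map onto
    // ColumnFamilyDescriptorBuilder setters in the 2.x client API.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example"))          // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setCompressionType(Compression.Algorithm.NONE)
            .build())
        .build();
    System.out.println(desc);
  }
}
```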
2023-06-02 14:57:30,471 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 14:57:30,471 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 14:57:30,471 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 14:57:30,473 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 14:57:30,474 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-02 14:57:30,478 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 14:57:30,479 DEBUG [RS:0;jenkins-hbase4:36927] zookeeper.ReadOnlyZKClient(139): Connect 0x00dcc112 to 127.0.0.1:52513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:57:30,483 DEBUG [RS:0;jenkins-hbase4:36927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e1b3f79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:57:30,483 DEBUG [RS:0;jenkins-hbase4:36927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a24aa62, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:57:30,492 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36927 2023-06-02 14:57:30,492 INFO [RS:0;jenkins-hbase4:36927] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 14:57:30,492 INFO [RS:0;jenkins-hbase4:36927] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 14:57:30,492 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-02 14:57:30,493 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,39103,1685717850177 with isa=jenkins-hbase4.apache.org/172.31.14.131:36927, startcode=1685717850240 2023-06-02 14:57:30,493 DEBUG [RS:0;jenkins-hbase4:36927] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 14:57:30,497 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59171, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 14:57:30,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,499 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e 2023-06-02 14:57:30,499 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34397 2023-06-02 14:57:30,499 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 14:57:30,501 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:57:30,501 DEBUG [RS:0;jenkins-hbase4:36927] zookeeper.ZKUtil(162): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,502 WARN [RS:0;jenkins-hbase4:36927] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
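[Editorial sketch] The handshake above (reportForDuty from the region server, ServerManager registering it, configuration pushed back from the master, the znode under /hbase/rs) is what makes a server visible as live to clients. A sketch of listing live region servers from client code follows, assuming the standard Admin#getClusterMetrics API; the ZooKeeper quorum and port are illustrative values to point at the cluster under test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative ZooKeeper settings; replace with the cluster's actual quorum.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "2181");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Each ServerName here corresponds to a region server that completed the
      // reportForDuty/registration handshake seen in the log above.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        System.out.println("live region server: " + sn);
      }
    }
  }
}
```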
2023-06-02 14:57:30,502 INFO [RS:0;jenkins-hbase4:36927] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:57:30,502 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1946): logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,502 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36927,1685717850240] 2023-06-02 14:57:30,506 DEBUG [RS:0;jenkins-hbase4:36927] zookeeper.ZKUtil(162): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,507 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 14:57:30,507 INFO [RS:0;jenkins-hbase4:36927] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 14:57:30,509 INFO [RS:0;jenkins-hbase4:36927] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 14:57:30,511 INFO [RS:0;jenkins-hbase4:36927] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 14:57:30,511 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,511 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 14:57:30,512 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,512 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,513 DEBUG [RS:0;jenkins-hbase4:36927] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:30,513 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,513 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,513 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,525 INFO [RS:0;jenkins-hbase4:36927] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 14:57:30,525 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36927,1685717850240-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 14:57:30,535 INFO [RS:0;jenkins-hbase4:36927] regionserver.Replication(203): jenkins-hbase4.apache.org,36927,1685717850240 started 2023-06-02 14:57:30,536 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36927,1685717850240, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36927, sessionid=0x1008c0b9b2a0001 2023-06-02 14:57:30,536 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 14:57:30,536 DEBUG [RS:0;jenkins-hbase4:36927] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,536 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36927,1685717850240' 2023-06-02 14:57:30,536 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:57:30,536 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36927,1685717850240' 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 14:57:30,537 DEBUG [RS:0;jenkins-hbase4:36927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 14:57:30,538 DEBUG [RS:0;jenkins-hbase4:36927] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 14:57:30,538 INFO [RS:0;jenkins-hbase4:36927] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 14:57:30,538 INFO [RS:0;jenkins-hbase4:36927] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-02 14:57:30,625 DEBUG [jenkins-hbase4:39103] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 14:57:30,626 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36927,1685717850240, state=OPENING 2023-06-02 14:57:30,627 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 14:57:30,630 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:30,630 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:57:30,630 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36927,1685717850240}] 2023-06-02 14:57:30,640 INFO [RS:0;jenkins-hbase4:36927] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36927%2C1685717850240, suffix=, logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240, archiveDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/oldWALs, maxLogs=32 2023-06-02 14:57:30,653 INFO [RS:0;jenkins-hbase4:36927] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717850642 2023-06-02 14:57:30,653 DEBUG [RS:0;jenkins-hbase4:36927] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK], DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] 2023-06-02 14:57:30,785 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:30,786 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 14:57:30,788 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35916, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 14:57:30,792 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 14:57:30,792 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:57:30,794 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta, suffix=.meta, logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240, archiveDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/oldWALs, maxLogs=32 2023-06-02 14:57:30,807 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717850796.meta 2023-06-02 14:57:30,807 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK], DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] 2023-06-02 14:57:30,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:57:30,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 14:57:30,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 14:57:30,809 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 14:57:30,809 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 14:57:30,809 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:30,809 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 14:57:30,809 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 14:57:30,811 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:57:30,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/info 2023-06-02 14:57:30,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/info 2023-06-02 14:57:30,812 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:57:30,813 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,813 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:57:30,814 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:57:30,814 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:57:30,815 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:57:30,815 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,815 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:57:30,816 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/table 2023-06-02 14:57:30,816 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740/table 2023-06-02 14:57:30,817 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:57:30,818 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:30,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740 2023-06-02 14:57:30,820 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/meta/1588230740 2023-06-02 14:57:30,823 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:57:30,824 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:57:30,825 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=743344, jitterRate=-0.054789721965789795}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:57:30,825 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:57:30,827 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685717850785 2023-06-02 14:57:30,830 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 14:57:30,831 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 14:57:30,831 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36927,1685717850240, state=OPEN 2023-06-02 14:57:30,833 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 14:57:30,834 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:57:30,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 14:57:30,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36927,1685717850240 in 204 msec 2023-06-02 14:57:30,839 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 14:57:30,839 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-06-02 14:57:30,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 429 msec 2023-06-02 14:57:30,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685717850842, completionTime=-1 2023-06-02 14:57:30,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 14:57:30,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 14:57:30,845 DEBUG [hconnection-0x3d63a84-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:57:30,847 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35920, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:57:30,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 14:57:30,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685717910848 2023-06-02 14:57:30,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685717970848 2023-06-02 14:57:30,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1685717850177-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1685717850177-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1685717850177-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39103, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:30,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-02 14:57:30,855 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:57:30,856 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 14:57:30,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 14:57:30,857 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:57:30,858 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:57:30,860 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:30,861 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346 empty. 2023-06-02 14:57:30,862 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:30,862 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 14:57:30,874 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 14:57:30,875 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 081a7d5337ab77ae6d277fd69317a346, NAME => 'hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp 2023-06-02 14:57:30,884 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:30,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 081a7d5337ab77ae6d277fd69317a346, disabling compactions & flushes 2023-06-02 14:57:30,885 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 
2023-06-02 14:57:30,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:57:30,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. after waiting 0 ms 2023-06-02 14:57:30,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:57:30,885 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:57:30,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 081a7d5337ab77ae6d277fd69317a346: 2023-06-02 14:57:30,888 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:57:30,889 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717850889"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717850889"}]},"ts":"1685717850889"} 2023-06-02 14:57:30,892 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:57:30,893 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:57:30,894 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717850893"}]},"ts":"1685717850893"} 2023-06-02 14:57:30,895 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 14:57:30,903 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=081a7d5337ab77ae6d277fd69317a346, ASSIGN}] 2023-06-02 14:57:30,905 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=081a7d5337ab77ae6d277fd69317a346, ASSIGN 2023-06-02 14:57:30,906 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=081a7d5337ab77ae6d277fd69317a346, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36927,1685717850240; forceNewPlan=false, retain=false 2023-06-02 14:57:31,058 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=081a7d5337ab77ae6d277fd69317a346, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,058 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717851057"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717851057"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717851057"}]},"ts":"1685717851057"} 2023-06-02 14:57:31,060 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 081a7d5337ab77ae6d277fd69317a346, server=jenkins-hbase4.apache.org,36927,1685717850240}] 2023-06-02 14:57:31,218 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:57:31,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 081a7d5337ab77ae6d277fd69317a346, NAME => 'hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:57:31,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:31,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,221 INFO [StoreOpener-081a7d5337ab77ae6d277fd69317a346-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,222 DEBUG [StoreOpener-081a7d5337ab77ae6d277fd69317a346-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/info 2023-06-02 14:57:31,223 DEBUG [StoreOpener-081a7d5337ab77ae6d277fd69317a346-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/info 2023-06-02 14:57:31,223 INFO [StoreOpener-081a7d5337ab77ae6d277fd69317a346-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 081a7d5337ab77ae6d277fd69317a346 columnFamilyName info 2023-06-02 14:57:31,224 INFO [StoreOpener-081a7d5337ab77ae6d277fd69317a346-1] regionserver.HStore(310): Store=081a7d5337ab77ae6d277fd69317a346/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:31,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,229 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:57:31,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:57:31,232 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 081a7d5337ab77ae6d277fd69317a346; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=861159, jitterRate=0.09502097964286804}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:57:31,232 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 081a7d5337ab77ae6d277fd69317a346: 2023-06-02 14:57:31,234 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346., pid=6, masterSystemTime=1685717851214 2023-06-02 14:57:31,236 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:57:31,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 
2023-06-02 14:57:31,237 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=081a7d5337ab77ae6d277fd69317a346, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,238 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717851237"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717851237"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717851237"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717851237"}]},"ts":"1685717851237"} 2023-06-02 14:57:31,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 14:57:31,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 081a7d5337ab77ae6d277fd69317a346, server=jenkins-hbase4.apache.org,36927,1685717850240 in 180 msec 2023-06-02 14:57:31,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 14:57:31,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=081a7d5337ab77ae6d277fd69317a346, ASSIGN in 340 msec 2023-06-02 14:57:31,247 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:57:31,248 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717851247"}]},"ts":"1685717851247"} 2023-06-02 14:57:31,249 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 14:57:31,252 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:57:31,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 397 msec 2023-06-02 14:57:31,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 14:57:31,260 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:57:31,260 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:31,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 14:57:31,275 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): 
master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:57:31,280 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-06-02 14:57:31,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 14:57:31,295 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:57:31,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-02 14:57:31,312 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 14:57:31,315 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.040sec 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1685717850177-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 14:57:31,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1685717850177-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-02 14:57:31,318 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 14:57:31,368 DEBUG [Listener at localhost/46485] zookeeper.ReadOnlyZKClient(139): Connect 0x6ca9c02f to 127.0.0.1:52513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:57:31,372 DEBUG [Listener at localhost/46485] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@167f49d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:57:31,374 DEBUG [hconnection-0x2617e6f5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:57:31,377 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35924, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:57:31,379 INFO [Listener at localhost/46485] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:57:31,379 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:31,382 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 14:57:31,382 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:57:31,383 INFO [Listener at localhost/46485] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 14:57:31,395 INFO [Listener at localhost/46485] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:57:31,396 INFO [Listener at localhost/46485] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-02 14:57:31,398 INFO [Listener at localhost/46485] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33179 2023-06-02 14:57:31,398 INFO [Listener at localhost/46485] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 14:57:31,399 DEBUG [Listener at localhost/46485] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 14:57:31,399 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:31,401 INFO [Listener at localhost/46485] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:57:31,402 INFO [Listener at localhost/46485] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33179 connecting to ZooKeeper ensemble=127.0.0.1:52513 2023-06-02 14:57:31,406 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:331790x0, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:57:31,407 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(162): regionserver:331790x0, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:57:31,407 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33179-0x1008c0b9b2a0005 connected 2023-06-02 14:57:31,408 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(162): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-06-02 14:57:31,409 DEBUG [Listener at localhost/46485] zookeeper.ZKUtil(164): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:57:31,409 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33179 2023-06-02 14:57:31,409 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33179 2023-06-02 14:57:31,410 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33179 2023-06-02 14:57:31,411 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33179 2023-06-02 14:57:31,411 DEBUG [Listener at localhost/46485] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33179 2023-06-02 14:57:31,413 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(951): ClusterId : ca074223-c8e2-414c-bc0c-ee9b49bb81e8 2023-06-02 14:57:31,413 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 14:57:31,416 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 14:57:31,416 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 14:57:31,418 DEBUG 
[RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 14:57:31,419 DEBUG [RS:1;jenkins-hbase4:33179] zookeeper.ReadOnlyZKClient(139): Connect 0x11d32d5c to 127.0.0.1:52513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:57:31,424 DEBUG [RS:1;jenkins-hbase4:33179] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f14d116, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:57:31,424 DEBUG [RS:1;jenkins-hbase4:33179] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b2b462c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:57:31,433 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33179 2023-06-02 14:57:31,434 INFO [RS:1;jenkins-hbase4:33179] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 14:57:31,434 INFO [RS:1;jenkins-hbase4:33179] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 14:57:31,434 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1022): About to register with Master. 2023-06-02 14:57:31,435 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,39103,1685717850177 with isa=jenkins-hbase4.apache.org/172.31.14.131:33179, startcode=1685717851395 2023-06-02 14:57:31,435 DEBUG [RS:1;jenkins-hbase4:33179] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 14:57:31,438 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57873, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 14:57:31,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,438 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e 2023-06-02 14:57:31,439 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34397 2023-06-02 14:57:31,439 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 14:57:31,441 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:57:31,441 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:57:31,442 DEBUG [RS:1;jenkins-hbase4:33179] zookeeper.ZKUtil(162): regionserver:33179-0x1008c0b9b2a0005, 
quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,442 WARN [RS:1;jenkins-hbase4:33179] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-02 14:57:31,442 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33179,1685717851395] 2023-06-02 14:57:31,442 INFO [RS:1;jenkins-hbase4:33179] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:57:31,442 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,442 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1946): logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,448 DEBUG [RS:1;jenkins-hbase4:33179] zookeeper.ZKUtil(162): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,449 DEBUG [RS:1;jenkins-hbase4:33179] zookeeper.ZKUtil(162): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,450 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 14:57:31,450 INFO [RS:1;jenkins-hbase4:33179] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 14:57:31,454 INFO [RS:1;jenkins-hbase4:33179] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 14:57:31,454 INFO [RS:1;jenkins-hbase4:33179] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 14:57:31,455 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:31,455 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 14:57:31,456 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 14:57:31,457 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,457 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,457 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,457 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,458 DEBUG [RS:1;jenkins-hbase4:33179] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:57:31,459 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:31,459 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:31,459 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 14:57:31,475 INFO [RS:1;jenkins-hbase4:33179] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 14:57:31,476 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33179,1685717851395-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 14:57:31,492 INFO [RS:1;jenkins-hbase4:33179] regionserver.Replication(203): jenkins-hbase4.apache.org,33179,1685717851395 started 2023-06-02 14:57:31,492 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33179,1685717851395, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33179, sessionid=0x1008c0b9b2a0005 2023-06-02 14:57:31,492 INFO [Listener at localhost/46485] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:33179,5,FailOnTimeoutGroup] 2023-06-02 14:57:31,492 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 14:57:31,493 INFO [Listener at localhost/46485] wal.TestLogRolling(323): Replication=2 2023-06-02 14:57:31,493 DEBUG [RS:1;jenkins-hbase4:33179] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,493 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33179,1685717851395' 2023-06-02 14:57:31,494 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:57:31,494 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:57:31,495 DEBUG [Listener at localhost/46485] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-02 14:57:31,496 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 14:57:31,496 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 14:57:31,496 DEBUG [RS:1;jenkins-hbase4:33179] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:57:31,496 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33179,1685717851395' 2023-06-02 14:57:31,496 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 14:57:31,497 DEBUG [RS:1;jenkins-hbase4:33179] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 14:57:31,498 DEBUG [RS:1;jenkins-hbase4:33179] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 14:57:31,498 INFO [RS:1;jenkins-hbase4:33179] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 14:57:31,499 INFO [RS:1;jenkins-hbase4:33179] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-02 14:57:31,500 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-02 14:57:31,502 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-06-02 14:57:31,502 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-02 14:57:31,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:57:31,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-06-02 14:57:31,507 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:57:31,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-06-02 14:57:31,508 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:57:31,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:57:31,512 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,512 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc empty. 
2023-06-02 14:57:31,513 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,513 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-02 14:57:31,526 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-02 14:57:31,528 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5f08ebfb8236ab0caffaba761c784ebc, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/.tmp 2023-06-02 14:57:31,541 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:31,541 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 5f08ebfb8236ab0caffaba761c784ebc, disabling compactions & flushes 2023-06-02 14:57:31,541 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:57:31,541 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:57:31,541 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. after waiting 0 ms 2023-06-02 14:57:31,541 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:57:31,541 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 
2023-06-02 14:57:31,542 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:57:31,545 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:57:31,546 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685717851546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717851546"}]},"ts":"1685717851546"} 2023-06-02 14:57:31,548 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:57:31,549 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:57:31,550 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717851550"}]},"ts":"1685717851550"} 2023-06-02 14:57:31,551 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-02 14:57:31,559 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-06-02 14:57:31,561 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-02 14:57:31,561 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-02 14:57:31,561 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-02 14:57:31,562 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5f08ebfb8236ab0caffaba761c784ebc, ASSIGN}] 2023-06-02 14:57:31,564 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5f08ebfb8236ab0caffaba761c784ebc, ASSIGN 2023-06-02 14:57:31,565 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5f08ebfb8236ab0caffaba761c784ebc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36927,1685717850240; forceNewPlan=false, retain=false 2023-06-02 14:57:31,602 INFO [RS:1;jenkins-hbase4:33179] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33179%2C1685717851395, suffix=, logDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,33179,1685717851395, 
archiveDir=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/oldWALs, maxLogs=32 2023-06-02 14:57:31,626 INFO [RS:1;jenkins-hbase4:33179] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,33179,1685717851395/jenkins-hbase4.apache.org%2C33179%2C1685717851395.1685717851603 2023-06-02 14:57:31,626 DEBUG [RS:1;jenkins-hbase4:33179] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK], DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]] 2023-06-02 14:57:31,717 INFO [jenkins-hbase4:39103] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-06-02 14:57:31,718 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5f08ebfb8236ab0caffaba761c784ebc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,719 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685717851718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717851718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717851718"}]},"ts":"1685717851718"} 2023-06-02 14:57:31,722 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5f08ebfb8236ab0caffaba761c784ebc, server=jenkins-hbase4.apache.org,36927,1685717850240}] 2023-06-02 14:57:31,880 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 
2023-06-02 14:57:31,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5f08ebfb8236ab0caffaba761c784ebc, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:57:31,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:57:31,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,883 INFO [StoreOpener-5f08ebfb8236ab0caffaba761c784ebc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,884 DEBUG [StoreOpener-5f08ebfb8236ab0caffaba761c784ebc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info 2023-06-02 14:57:31,884 DEBUG [StoreOpener-5f08ebfb8236ab0caffaba761c784ebc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info 2023-06-02 14:57:31,885 INFO [StoreOpener-5f08ebfb8236ab0caffaba761c784ebc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5f08ebfb8236ab0caffaba761c784ebc columnFamilyName info 2023-06-02 14:57:31,885 INFO [StoreOpener-5f08ebfb8236ab0caffaba761c784ebc-1] regionserver.HStore(310): Store=5f08ebfb8236ab0caffaba761c784ebc/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:57:31,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:31,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:57:31,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5f08ebfb8236ab0caffaba761c784ebc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=838776, jitterRate=0.0665597915649414}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:57:31,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:57:31,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc., pid=11, masterSystemTime=1685717851875 2023-06-02 14:57:31,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:57:31,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 
2023-06-02 14:57:31,899 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5f08ebfb8236ab0caffaba761c784ebc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:57:31,900 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685717851899"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717851899"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717851899"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717851899"}]},"ts":"1685717851899"} 2023-06-02 14:57:31,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-02 14:57:31,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5f08ebfb8236ab0caffaba761c784ebc, server=jenkins-hbase4.apache.org,36927,1685717850240 in 180 msec 2023-06-02 14:57:31,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-02 14:57:31,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5f08ebfb8236ab0caffaba761c784ebc, ASSIGN in 343 msec 2023-06-02 14:57:31,909 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:57:31,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717851909"}]},"ts":"1685717851909"} 2023-06-02 14:57:31,911 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-02 14:57:31,914 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:57:31,916 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 412 msec 2023-06-02 14:57:34,296 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 14:57:36,507 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-02 14:57:36,508 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 14:57:36,509 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-02 14:57:41,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:57:41,510 INFO [Listener at localhost/46485] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-02 14:57:41,513 DEBUG [Listener at localhost/46485] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-06-02 14:57:41,513 DEBUG [Listener at localhost/46485] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:57:41,526 WARN [Listener at localhost/46485] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:41,528 WARN [Listener at localhost/46485] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:41,530 INFO [Listener at localhost/46485] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:41,535 INFO [Listener at localhost/46485] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_35505_datanode____.m38pfl/webapp 2023-06-02 14:57:41,627 INFO [Listener at localhost/46485] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35505 2023-06-02 14:57:41,638 WARN [Listener at localhost/36153] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:41,656 WARN [Listener at localhost/36153] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:41,659 WARN [Listener at localhost/36153] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:41,660 INFO [Listener at localhost/36153] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:41,665 INFO [Listener at localhost/36153] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_41319_datanode____rgkyjp/webapp 2023-06-02 14:57:41,741 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39f0dee119443007: Processing first storage report for DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc from datanode 35829411-93da-4cb9-979d-8b57532bdf99 2023-06-02 14:57:41,741 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x39f0dee119443007: from storage DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc node DatanodeRegistration(127.0.0.1:46155, datanodeUuid=35829411-93da-4cb9-979d-8b57532bdf99, infoPort=41595, infoSecurePort=0, ipcPort=36153, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-02 14:57:41,742 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39f0dee119443007: Processing first storage report for DS-55592193-604d-4d34-b144-e8c75c9b1151 from datanode 35829411-93da-4cb9-979d-8b57532bdf99 2023-06-02 14:57:41,742 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x39f0dee119443007: from storage DS-55592193-604d-4d34-b144-e8c75c9b1151 node DatanodeRegistration(127.0.0.1:46155, datanodeUuid=35829411-93da-4cb9-979d-8b57532bdf99, infoPort=41595, infoSecurePort=0, ipcPort=36153, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:41,770 INFO [Listener at localhost/36153] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41319 2023-06-02 14:57:41,778 WARN [Listener at localhost/45535] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:41,851 WARN [Listener at localhost/45535] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:41,856 WARN [Listener at localhost/45535] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:41,857 INFO [Listener at localhost/45535] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:41,860 INFO [Listener at localhost/45535] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_38417_datanode____.jdxfwk/webapp 2023-06-02 14:57:41,929 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeebb1f15e773f06d: Processing first storage report for DS-0214b404-4ad1-4815-b252-e2f9d08a51aa from datanode 700ca185-bc36-4ceb-bafe-e46db925f196 2023-06-02 14:57:41,929 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeebb1f15e773f06d: from storage DS-0214b404-4ad1-4815-b252-e2f9d08a51aa node DatanodeRegistration(127.0.0.1:38517, datanodeUuid=700ca185-bc36-4ceb-bafe-e46db925f196, infoPort=42697, infoSecurePort=0, ipcPort=45535, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:41,929 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeebb1f15e773f06d: Processing first storage report for DS-c5875241-4c48-467f-85a3-b9e795fcf445 from datanode 700ca185-bc36-4ceb-bafe-e46db925f196 2023-06-02 14:57:41,929 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeebb1f15e773f06d: from storage DS-c5875241-4c48-467f-85a3-b9e795fcf445 node DatanodeRegistration(127.0.0.1:38517, datanodeUuid=700ca185-bc36-4ceb-bafe-e46db925f196, infoPort=42697, infoSecurePort=0, ipcPort=45535, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:41,961 INFO [Listener at localhost/45535] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38417 2023-06-02 14:57:41,971 WARN [Listener at localhost/41307] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:42,064 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaa7d74df34b4f84c: Processing first storage report for 
DS-31144227-a0cb-4645-a5fc-cf0379c94948 from datanode bf903c2a-eebf-4615-a139-4613a5466680 2023-06-02 14:57:42,064 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaa7d74df34b4f84c: from storage DS-31144227-a0cb-4645-a5fc-cf0379c94948 node DatanodeRegistration(127.0.0.1:36483, datanodeUuid=bf903c2a-eebf-4615-a139-4613a5466680, infoPort=44451, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:42,064 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaa7d74df34b4f84c: Processing first storage report for DS-c767dff8-9796-4a09-94ae-4565789d699f from datanode bf903c2a-eebf-4615-a139-4613a5466680 2023-06-02 14:57:42,064 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaa7d74df34b4f84c: from storage DS-c767dff8-9796-4a09-94ae-4565789d699f node DatanodeRegistration(127.0.0.1:36483, datanodeUuid=bf903c2a-eebf-4615-a139-4613a5466680, infoPort=44451, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:57:42,078 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:42,079 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,081 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,084 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717850796.meta block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK], DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]) is bad. 
2023-06-02 14:57:42,083 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-02 14:57:42,085 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717850343 block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK], DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]) is bad. 2023-06-02 14:57:42,085 WARN [PacketResponder: BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39667]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,083 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-02 14:57:42,084 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717850642 block 
BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK], DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]) is bad. 2023-06-02 14:57:42,091 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,33179,1685717851395/jenkins-hbase4.apache.org%2C33179%2C1685717851395.1685717851603 block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK], DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]) is bad. 2023-06-02 14:57:42,093 WARN [PacketResponder: BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39667]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,099 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:59990 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59990 dst: /127.0.0.1:34867 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,107 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:60026 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60026 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34867 remote=/127.0.0.1:60026]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,107 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1503585066_17 at /127.0.0.1:60074 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60074 dst: /127.0.0.1:34867 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,107 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:60032 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60032 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34867 remote=/127.0.0.1:60032]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,107 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:42,109 WARN [PacketResponder: BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34867]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,110 WARN [PacketResponder: 
BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34867]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,115 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:44274 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39667:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44274 dst: /127.0.0.1:39667 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,115 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:44278 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39667:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44278 dst: /127.0.0.1:39667 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,130 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid ae50a0db-af61-48db-96b8-ea952100721c) service to localhost/127.0.0.1:34397 2023-06-02 14:57:42,131 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data3/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:42,131 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data4/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:42,215 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:44248 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39667:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44248 dst: /127.0.0.1:39667 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,215 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1503585066_17 at /127.0.0.1:44324 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:39667:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44324 dst: /127.0.0.1:39667 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,217 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:42,217 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,218 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,218 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,217 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1017] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:42,225 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:42,328 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:36730 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36730 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,329 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:57:42,329 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1503585066_17 at /127.0.0.1:36734 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36734 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,329 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:36714 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36714 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,328 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:36700 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34867:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36700 dst: /127.0.0.1:34867 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:42,330 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid a9cb3d3e-740a-4e49-b4c0-a95b05c6ad71) service to localhost/127.0.0.1:34397 2023-06-02 14:57:42,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data1/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:42,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data2/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:42,338 WARN [RS:0;jenkins-hbase4:36927.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:57:42,338 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36927%2C1685717850240:(num 1685717850642) roll requested 2023-06-02 14:57:42,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36927] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:57:42,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36927] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:35924 deadline: 1685717872337, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-02 14:57:42,351 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-02 14:57:42,352 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717850642 with entries=4, filesize=983 B; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 2023-06-02 14:57:42,354 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK], DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK]] 2023-06-02 14:57:42,355 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting...
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:57:42,355 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717850642 is not closed yet, will try archiving it next time 2023-06-02 14:57:42,355 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717850642; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:57:54,425 INFO [Listener at localhost/41307] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 2023-06-02 14:57:54,426 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:54,427 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:54,427 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 block BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK], DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK]) is bad. 
2023-06-02 14:57:54,432 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:54,433 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:41954 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:46155:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41954 dst: /127.0.0.1:46155 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:46155 remote=/127.0.0.1:41954]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:54,434 WARN [PacketResponder: BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:46155]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:54,435 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:51822 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:38517:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51822 dst: /127.0.0.1:38517 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:54,537 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:57:54,537 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid 700ca185-bc36-4ceb-bafe-e46db925f196) service to localhost/127.0.0.1:34397 2023-06-02 14:57:54,538 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data7/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:54,538 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data8/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:54,543 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK]] 2023-06-02 14:57:54,543 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK]] 2023-06-02 14:57:54,543 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36927%2C1685717850240:(num 1685717862338) roll requested 2023-06-02 14:57:54,547 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35930 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current]'}, localName='127.0.0.1:36483', datanodeUuid='bf903c2a-eebf-4615-a139-4613a5466680', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741840_1021 to mirror 127.0.0.1:38517: java.net.ConnectException: Connection refused 2023-06-02 14:57:54,547 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741840_1021 2023-06-02 14:57:54,547 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35930 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35930 dst: /127.0.0.1:36483 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:54,550 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK] 2023-06-02 14:57:54,554 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741841_1022 2023-06-02 14:57:54,554 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK] 2023-06-02 14:57:54,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:37300 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741842_1023]] datanode.DataXceiver(847): 
DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data6/current]'}, localName='127.0.0.1:46155', datanodeUuid='35829411-93da-4cb9-979d-8b57532bdf99', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741842_1023 to mirror 127.0.0.1:39667: java.net.ConnectException: Connection refused 2023-06-02 14:57:54,557 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741842_1023 2023-06-02 14:57:54,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:37300 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:46155:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37300 dst: /127.0.0.1:46155 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:54,558 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] 2023-06-02 14:57:54,563 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717874543 2023-06-02 14:57:54,563 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK], DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:54,563 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 is not closed yet, will try archiving it next time 2023-06-02 14:57:57,747 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@8d7157d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46155, datanodeUuid=35829411-93da-4cb9-979d-8b57532bdf99, infoPort=41595, infoSecurePort=0, ipcPort=36153, 
storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741839_1020 to 127.0.0.1:34867 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,548 WARN [Listener at localhost/41307] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:57:58,550 WARN [ResponseProcessor for block BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:57:58,551 WARN [DataStreamer for file /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717874543 block BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK], DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK]) is bad. 2023-06-02 14:57:58,554 INFO [Listener at localhost/41307] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:57:58,554 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35940 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35940 dst: /127.0.0.1:36483 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36483 remote=/127.0.0.1:35940]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,555 WARN [PacketResponder: BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36483]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,556 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:37316 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:46155:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37316 dst: /127.0.0.1:46155 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,659 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:57:58,659 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid 35829411-93da-4cb9-979d-8b57532bdf99) service to localhost/127.0.0.1:34397 2023-06-02 14:57:58,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data5/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:58,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data6/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:57:58,665 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,665 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,665 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36927%2C1685717850240:(num 1685717874543) roll requested 2023-06-02 14:57:58,668 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741844_1026 2023-06-02 14:57:58,669 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] 2023-06-02 14:57:58,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36927] regionserver.HRegion(9158): Flush requested on 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:57:58,670 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5f08ebfb8236ab0caffaba761c784ebc 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 14:57:58,672 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35946 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current]'}, localName='127.0.0.1:36483', datanodeUuid='bf903c2a-eebf-4615-a139-4613a5466680', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741845_1027 to mirror 127.0.0.1:38517: java.net.ConnectException: Connection refused 2023-06-02 14:57:58,672 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741845_1027 2023-06-02 14:57:58,672 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35946 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35946 dst: /127.0.0.1:36483 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,673 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK] 2023-06-02 14:57:58,676 WARN [Thread-652] 
hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741846_1028 2023-06-02 14:57:58,677 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK] 2023-06-02 14:57:58,678 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741847_1029 2023-06-02 14:57:58,678 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:57:58,679 WARN [IPC Server handler 0 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-02 14:57:58,679 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741848_1030 2023-06-02 14:57:58,679 WARN [IPC Server handler 0 on default port 34397] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-02 14:57:58,679 WARN [IPC Server handler 0 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-02 14:57:58,680 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK] 2023-06-02 14:57:58,682 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741850_1032 2023-06-02 14:57:58,682 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK] 2023-06-02 14:57:58,683 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741851_1033 2023-06-02 14:57:58,684 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:57:58,692 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35978 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current]'}, localName='127.0.0.1:36483', datanodeUuid='bf903c2a-eebf-4615-a139-4613a5466680', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741852_1034 to mirror 127.0.0.1:39667: java.net.ConnectException: Connection refused 2023-06-02 14:57:58,692 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741852_1034 2023-06-02 14:57:58,692 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:35978 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35978 dst: /127.0.0.1:36483 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,692 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] 2023-06-02 14:57:58,693 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717874543 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878665 2023-06-02 14:57:58,693 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,693 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717874543 is not closed yet, will try archiving it next time 2023-06-02 14:57:58,693 WARN [IPC Server handler 3 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-02 14:57:58,694 WARN [IPC Server handler 3 on default port 34397] 
protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-02 14:57:58,694 WARN [IPC Server handler 3 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-02 14:57:58,887 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,887 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,887 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36927%2C1685717850240:(num 1685717878665) roll requested 2023-06-02 14:57:58,890 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741854_1036 2023-06-02 14:57:58,891 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:57:58,892 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741855_1037 2023-06-02 14:57:58,892 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39667,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK] 2023-06-02 14:57:58,893 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741856_1038 2023-06-02 14:57:58,894 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK] 2023-06-02 14:57:58,896 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:36000 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741857_1039]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current]'}, localName='127.0.0.1:36483', datanodeUuid='bf903c2a-eebf-4615-a139-4613a5466680', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741857_1039 to mirror 127.0.0.1:38517: java.net.ConnectException: Connection refused 2023-06-02 14:57:58,896 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741857_1039 2023-06-02 14:57:58,896 ERROR [DataXceiver for 
client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:36000 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741857_1039]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36000 dst: /127.0.0.1:36483 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:57:58,896 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK] 2023-06-02 14:57:58,897 WARN [IPC Server handler 1 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-02 14:57:58,897 WARN [IPC Server handler 1 on default port 34397] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-02 14:57:58,897 WARN [IPC Server handler 1 on default port 34397] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-02 14:57:58,901 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878665 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878887 2023-06-02 14:57:58,902 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:57:58,902 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): 
hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717874543 is not closed yet, will try archiving it next time 2023-06-02 14:57:58,902 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878665 is not closed yet, will try archiving it next time 2023-06-02 14:57:59,090 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-06-02 14:57:59,096 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878665 is not closed yet, will try archiving it next time 2023-06-02 14:57:59,098 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/.tmp/info/6366c3d108784169aa22943d156fcf9d 2023-06-02 14:57:59,106 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/.tmp/info/6366c3d108784169aa22943d156fcf9d as hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info/6366c3d108784169aa22943d156fcf9d 2023-06-02 14:57:59,112 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info/6366c3d108784169aa22943d156fcf9d, entries=5, sequenceid=12, filesize=10.0 K 2023-06-02 14:57:59,112 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 5f08ebfb8236ab0caffaba761c784ebc in 442ms, sequenceid=12, compaction requested=false 2023-06-02 14:57:59,113 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:57:59,296 WARN [Listener at localhost/41307] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:57:59,298 WARN [Listener at localhost/41307] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:57:59,299 INFO [Listener at localhost/41307] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:57:59,304 INFO [Listener at localhost/41307] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/java.io.tmpdir/Jetty_localhost_46435_datanode____cv39a3/webapp 2023-06-02 14:57:59,305 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 to hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/oldWALs/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717862338 2023-06-02 14:57:59,411 INFO [Listener at localhost/41307] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46435 2023-06-02 14:57:59,419 WARN [Listener at localhost/46489] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:57:59,522 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x531d776adae24056: Processing first storage report for DS-557da12f-a58a-4e52-8aaf-2a570ccab906 from datanode ae50a0db-af61-48db-96b8-ea952100721c 2023-06-02 14:57:59,523 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x531d776adae24056: from storage DS-557da12f-a58a-4e52-8aaf-2a570ccab906 node DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-02 14:57:59,523 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x531d776adae24056: Processing first storage report for DS-b69e889c-bf94-4ad2-9cb6-921f188e963d from datanode ae50a0db-af61-48db-96b8-ea952100721c 2023-06-02 14:57:59,524 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x531d776adae24056: from storage DS-b69e889c-bf94-4ad2-9cb6-921f188e963d node DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-02 14:58:00,067 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@26e35584] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36483, datanodeUuid=bf903c2a-eebf-4615-a139-4613a5466680, infoPort=44451, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741843_1025 to 127.0.0.1:46155 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:00,067 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@438dfa05] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36483, 
datanodeUuid=bf903c2a-eebf-4615-a139-4613a5466680, infoPort=44451, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741853_1035 to 127.0.0.1:38517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:00,420 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:00,421 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39103%2C1685717850177:(num 1685717850343) roll requested 2023-06-02 14:58:00,425 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741859_1041 2023-06-02 14:58:00,426 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:00,426 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38517,DS-0214b404-4ad1-4815-b252-e2f9d08a51aa,DISK] 2023-06-02 14:58:00,426 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:00,429 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:49956 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current]'}, localName='127.0.0.1:36483', datanodeUuid='bf903c2a-eebf-4615-a139-4613a5466680', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741860_1042 to mirror 127.0.0.1:46155: java.net.ConnectException: Connection refused 2023-06-02 14:58:00,429 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741860_1042 2023-06-02 14:58:00,429 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:49956 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:36483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49956 dst: /127.0.0.1:36483 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:00,430 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:58:00,436 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-02 14:58:00,436 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717850343 with entries=88, filesize=43.70 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717880421 2023-06-02 14:58:00,436 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK], DatanodeInfoWithStorage[127.0.0.1:33285,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]] 2023-06-02 14:58:00,436 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:00,437 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717850343 is not closed yet, will try archiving it next time 2023-06-02 14:58:00,437 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177/jenkins-hbase4.apache.org%2C39103%2C1685717850177.1685717850343; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:01,068 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6f2fcbff] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36483, datanodeUuid=bf903c2a-eebf-4615-a139-4613a5466680, infoPort=44451, infoSecurePort=0, ipcPort=41307, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741849_1031 to 127.0.0.1:46155 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:12,524 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7364f471] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741837_1013 to 127.0.0.1:46155 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:12,524 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@488c0616] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741835_1011 to 127.0.0.1:38517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:13,524 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2f995d18] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741831_1007 to 127.0.0.1:38517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:13,524 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4058232f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33285, datanodeUuid=ae50a0db-af61-48db-96b8-ea952100721c, infoPort=41813, infoSecurePort=0, ipcPort=46489, storageInfo=lv=-57;cid=testClusterID;nsid=1368835467;c=1685717849563):Failed to transfer BP-527177821-172.31.14.131-1685717849563:blk_1073741827_1003 to 127.0.0.1:38517 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:17,767 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:56390 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741862_1044]] datanode.DataXceiver(847): 
DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data4/current]'}, localName='127.0.0.1:33285', datanodeUuid='ae50a0db-af61-48db-96b8-ea952100721c', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741862_1044 to mirror 127.0.0.1:46155: java.net.ConnectException: Connection refused 2023-06-02 14:58:17,767 WARN [Thread-718] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741862_1044 2023-06-02 14:58:17,767 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_633706408_17 at /127.0.0.1:56390 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741862_1044]] datanode.DataXceiver(323): 127.0.0.1:33285:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56390 dst: /127.0.0.1:33285 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:17,768 WARN [Thread-718] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:58:17,775 INFO [Listener at localhost/46489] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878887 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717897762 2023-06-02 14:58:17,775 DEBUG [Listener at localhost/46489] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33285,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK], DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK]] 2023-06-02 14:58:17,775 DEBUG [Listener at localhost/46489] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.1685717878887 is not closed yet, will try archiving it next time 2023-06-02 14:58:17,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36927] regionserver.HRegion(9158): Flush requested on 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:58:17,780 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5f08ebfb8236ab0caffaba761c784ebc 1/1 column families, dataSize=10.50 
KB heapSize=11.50 KB 2023-06-02 14:58:17,781 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-06-02 14:58:17,787 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:56418 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data4/current]'}, localName='127.0.0.1:33285', datanodeUuid='ae50a0db-af61-48db-96b8-ea952100721c', xmitsInProgress=0}:Exception transfering block BP-527177821-172.31.14.131-1685717849563:blk_1073741864_1046 to mirror 127.0.0.1:46155: java.net.ConnectException: Connection refused 2023-06-02 14:58:17,788 WARN [Thread-726] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741864_1046 2023-06-02 14:58:17,788 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-800285370_17 at /127.0.0.1:56418 [Receiving block BP-527177821-172.31.14.131-1685717849563:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:33285:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56418 dst: /127.0.0.1:33285 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:17,789 WARN [Thread-726] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:58:17,796 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/.tmp/info/320ceb267c8c4e0f94b3fd505557b7bd 2023-06-02 14:58:17,797 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 14:58:17,797 INFO [Listener at localhost/46489] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-02 14:58:17,797 DEBUG [Listener at localhost/46489] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ca9c02f to 127.0.0.1:52513 2023-06-02 14:58:17,797 DEBUG [Listener at localhost/46489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:17,797 DEBUG [Listener at localhost/46489] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 14:58:17,798 DEBUG [Listener at localhost/46489] util.JVMClusterUtil(257): Found active master 
hash=1545614088, stopped=false 2023-06-02 14:58:17,798 INFO [Listener at localhost/46489] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:58:17,800 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:58:17,800 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:58:17,800 INFO [Listener at localhost/46489] procedure2.ProcedureExecutor(629): Stopping 2023-06-02 14:58:17,800 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:58:17,800 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:17,800 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:58:17,800 DEBUG [Listener at localhost/46489] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58a7c9a0 to 127.0.0.1:52513 2023-06-02 14:58:17,800 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:58:17,801 DEBUG [Listener at localhost/46489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:17,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:58:17,801 INFO [Listener at localhost/46489] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36927,1685717850240' ***** 2023-06-02 14:58:17,801 INFO [Listener at localhost/46489] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 14:58:17,802 INFO [Listener at localhost/46489] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33179,1685717851395' ***** 2023-06-02 14:58:17,802 INFO [Listener at localhost/46489] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 14:58:17,802 INFO [RS:0;jenkins-hbase4:36927] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 14:58:17,802 INFO [RS:1;jenkins-hbase4:33179] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 14:58:17,802 INFO [RS:1;jenkins-hbase4:33179] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-02 14:58:17,802 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 14:58:17,802 INFO [RS:1;jenkins-hbase4:33179] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-02 14:58:17,802 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:58:17,803 DEBUG [RS:1;jenkins-hbase4:33179] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11d32d5c to 127.0.0.1:52513 2023-06-02 14:58:17,803 DEBUG [RS:1;jenkins-hbase4:33179] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:17,803 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33179,1685717851395; all regions closed. 2023-06-02 14:58:17,803 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:58:17,805 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,807 ERROR [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 2023-06-02 14:58:17,808 DEBUG [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,808 DEBUG [RS:1;jenkins-hbase4:33179] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:17,808 INFO [RS:1;jenkins-hbase4:33179] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:58:17,809 INFO [RS:1;jenkins-hbase4:33179] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-02 14:58:17,809 INFO [RS:1;jenkins-hbase4:33179] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 14:58:17,809 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 14:58:17,809 INFO [RS:1;jenkins-hbase4:33179] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 14:58:17,809 INFO [RS:1;jenkins-hbase4:33179] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-02 14:58:17,809 INFO [RS:1;jenkins-hbase4:33179] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33179 2023-06-02 14:58:17,813 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:58:17,813 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33179,1685717851395 2023-06-02 14:58:17,813 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:58:17,813 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:58:17,813 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/.tmp/info/320ceb267c8c4e0f94b3fd505557b7bd as hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info/320ceb267c8c4e0f94b3fd505557b7bd 2023-06-02 14:58:17,813 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:58:17,814 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33179,1685717851395] 2023-06-02 14:58:17,815 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33179,1685717851395; numProcessing=1 2023-06-02 14:58:17,817 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33179,1685717851395 already deleted, retry=false 2023-06-02 14:58:17,817 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33179,1685717851395 expired; onlineServers=1 2023-06-02 14:58:17,820 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info/320ceb267c8c4e0f94b3fd505557b7bd, entries=8, sequenceid=25, filesize=13.2 K 2023-06-02 14:58:17,821 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 5f08ebfb8236ab0caffaba761c784ebc in 41ms, sequenceid=25, compaction requested=false 2023-06-02 14:58:17,821 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:58:17,821 DEBUG [MemStoreFlusher.0] 
regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-02 14:58:17,821 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 14:58:17,821 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5f08ebfb8236ab0caffaba761c784ebc/info/320ceb267c8c4e0f94b3fd505557b7bd because midkey is the same as first or last row 2023-06-02 14:58:17,821 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(3303): Received CLOSE for 081a7d5337ab77ae6d277fd69317a346 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(3303): Received CLOSE for 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:58:17,822 DEBUG [RS:0;jenkins-hbase4:36927] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00dcc112 to 127.0.0.1:52513 2023-06-02 14:58:17,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 081a7d5337ab77ae6d277fd69317a346, disabling compactions & flushes 2023-06-02 14:58:17,822 DEBUG [RS:0;jenkins-hbase4:36927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:17,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:58:17,822 INFO [RS:0;jenkins-hbase4:36927] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 14:58:17,823 INFO [RS:0;jenkins-hbase4:36927] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 14:58:17,823 INFO [RS:0;jenkins-hbase4:36927] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-02 14:58:17,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:58:17,823 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 14:58:17,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. after waiting 0 ms 2023-06-02 14:58:17,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 
2023-06-02 14:58:17,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 081a7d5337ab77ae6d277fd69317a346 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 14:58:17,823 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-02 14:58:17,823 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1478): Online Regions={081a7d5337ab77ae6d277fd69317a346=hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346., 5f08ebfb8236ab0caffaba761c784ebc=TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc., 1588230740=hbase:meta,,1.1588230740} 2023-06-02 14:58:17,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:58:17,823 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1504): Waiting on 081a7d5337ab77ae6d277fd69317a346, 1588230740, 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:58:17,823 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:58:17,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:58:17,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:58:17,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:58:17,824 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-06-02 14:58:17,824 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,824 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta:.meta(num 1685717850796) roll requested 2023-06-02 14:58:17,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:58:17,825 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,36927,1685717850240: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,826 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-02 14:58:17,828 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741866_1048 2023-06-02 14:58:17,828 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:58:17,829 WARN [Thread-736] hdfs.DataStreamer(1658): Abandoning BP-527177821-172.31.14.131-1685717849563:blk_1073741868_1050 2023-06-02 14:58:17,830 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-02 14:58:17,830 WARN [Thread-736] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46155,DS-1038c47d-9da1-4dfc-9022-e17a67ac87dc,DISK] 2023-06-02 14:58:17,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-02 14:58:17,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-02 14:58:17,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-02 14:58:17,833 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 999292928, "init": 513802240, "max": 2051014656, "used": 333156496 }, "NonHeapMemoryUsage": { "committed": 133980160, "init": 2555904, "max": -1, "used": 131301968 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-02 14:58:17,840 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-02 14:58:17,840 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717850796.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717897825.meta 2023-06-02 14:58:17,841 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36483,DS-31144227-a0cb-4645-a5fc-cf0379c94948,DISK], DatanodeInfoWithStorage[127.0.0.1:33285,DS-557da12f-a58a-4e52-8aaf-2a570ccab906,DISK]] 2023-06-02 14:58:17,841 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,841 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717850796.meta is not closed yet, will try archiving it next time 2023-06-02 14:58:17,841 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240/jenkins-hbase4.apache.org%2C36927%2C1685717850240.meta.1685717850796.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/.tmp/info/856116ef52bf4983b0b35828d096849b 2023-06-02 14:58:17,843 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103] master.MasterRpcServices(609): jenkins-hbase4.apache.org,36927,1685717850240 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,36927,1685717850240: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34867,DS-36374a88-acf3-46cb-8f32-23f63ad5facb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:17,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/.tmp/info/856116ef52bf4983b0b35828d096849b as hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/info/856116ef52bf4983b0b35828d096849b 2023-06-02 14:58:17,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/info/856116ef52bf4983b0b35828d096849b, entries=2, sequenceid=6, filesize=4.8 K 2023-06-02 14:58:17,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 081a7d5337ab77ae6d277fd69317a346 in 33ms, sequenceid=6, compaction requested=false 2023-06-02 14:58:17,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/data/hbase/namespace/081a7d5337ab77ae6d277fd69317a346/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-02 14:58:17,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 081a7d5337ab77ae6d277fd69317a346: 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685717850854.081a7d5337ab77ae6d277fd69317a346. 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5f08ebfb8236ab0caffaba761c784ebc, disabling compactions & flushes 2023-06-02 14:58:17,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. after waiting 0 ms 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 
2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:58:17,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:18,024 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(3303): Received CLOSE for 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:58:18,024 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5f08ebfb8236ab0caffaba761c784ebc, disabling compactions & flushes 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:58:18,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:18,024 DEBUG [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1504): Waiting on 1588230740, 5f08ebfb8236ab0caffaba761c784ebc 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:18,024 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. after waiting 0 ms 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5f08ebfb8236ab0caffaba761c784ebc: 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685717851502.5f08ebfb8236ab0caffaba761c784ebc. 
2023-06-02 14:58:18,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:58:18,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-02 14:58:18,100 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,100 INFO [RS:1;jenkins-hbase4:33179] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33179,1685717851395; zookeeper connection closed. 2023-06-02 14:58:18,100 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:33179-0x1008c0b9b2a0005, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,100 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@b5f9bb3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@b5f9bb3 2023-06-02 14:58:18,224 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-02 14:58:18,224 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36927,1685717850240; all regions closed. 2023-06-02 14:58:18,225 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:58:18,234 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/WALs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:58:18,238 DEBUG [RS:0;jenkins-hbase4:36927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:18,238 INFO [RS:0;jenkins-hbase4:36927] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:58:18,238 INFO [RS:0;jenkins-hbase4:36927] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-02 14:58:18,239 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-02 14:58:18,242 INFO [RS:0;jenkins-hbase4:36927] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36927 2023-06-02 14:58:18,244 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:58:18,244 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36927,1685717850240 2023-06-02 14:58:18,245 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36927,1685717850240] 2023-06-02 14:58:18,245 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36927,1685717850240; numProcessing=2 2023-06-02 14:58:18,248 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36927,1685717850240 already deleted, retry=false 2023-06-02 14:58:18,248 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36927,1685717850240 expired; onlineServers=0 2023-06-02 14:58:18,248 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,39103,1685717850177' ***** 2023-06-02 14:58:18,248 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-02 14:58:18,249 DEBUG [M:0;jenkins-hbase4:39103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@581fbc54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:58:18,249 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:58:18,249 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39103,1685717850177; all regions closed. 2023-06-02 14:58:18,249 DEBUG [M:0;jenkins-hbase4:39103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:58:18,249 DEBUG [M:0;jenkins-hbase4:39103] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 14:58:18,249 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-02 14:58:18,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717850421] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717850421,5,FailOnTimeoutGroup] 2023-06-02 14:58:18,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717850421] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717850421,5,FailOnTimeoutGroup] 2023-06-02 14:58:18,249 DEBUG [M:0;jenkins-hbase4:39103] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-02 14:58:18,251 INFO [M:0;jenkins-hbase4:39103] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-06-02 14:58:18,251 INFO [M:0;jenkins-hbase4:39103] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-02 14:58:18,251 INFO [M:0;jenkins-hbase4:39103] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-02 14:58:18,251 DEBUG [M:0;jenkins-hbase4:39103] master.HMaster(1512): Stopping service threads 2023-06-02 14:58:18,251 INFO [M:0;jenkins-hbase4:39103] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-02 14:58:18,252 ERROR [M:0;jenkins-hbase4:39103] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-02 14:58:18,252 INFO [M:0;jenkins-hbase4:39103] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-02 14:58:18,252 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-02 14:58:18,252 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-02 14:58:18,253 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:18,253 DEBUG [M:0;jenkins-hbase4:39103] zookeeper.ZKUtil(398): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-02 14:58:18,253 WARN [M:0;jenkins-hbase4:39103] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-02 14:58:18,253 INFO [M:0;jenkins-hbase4:39103] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-02 14:58:18,253 INFO [M:0;jenkins-hbase4:39103] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-02 14:58:18,253 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:58:18,254 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:58:18,254 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:18,254 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:18,254 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:58:18,254 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-02 14:58:18,254 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.07 KB heapSize=45.73 KB 2023-06-02 14:58:18,268 INFO [M:0;jenkins-hbase4:39103] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.07 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/65497428b80e4c268baa30d53c748f58 2023-06-02 14:58:18,275 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/65497428b80e4c268baa30d53c748f58 as hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/65497428b80e4c268baa30d53c748f58 2023-06-02 14:58:18,281 INFO [M:0;jenkins-hbase4:39103] regionserver.HStore(1080): Added hdfs://localhost:34397/user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/65497428b80e4c268baa30d53c748f58, entries=11, sequenceid=92, filesize=7.0 K 2023-06-02 14:58:18,282 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegion(2948): Finished flush of dataSize ~38.07 KB/38985, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=92, compaction requested=false 2023-06-02 14:58:18,283 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:18,283 DEBUG [M:0;jenkins-hbase4:39103] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:58:18,283 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a9fbb613-8a1b-be6f-8005-5edcbdb1234e/MasterData/WALs/jenkins-hbase4.apache.org,39103,1685717850177 2023-06-02 14:58:18,286 INFO [M:0;jenkins-hbase4:39103] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-02 14:58:18,286 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 14:58:18,287 INFO [M:0;jenkins-hbase4:39103] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39103 2023-06-02 14:58:18,289 DEBUG [M:0;jenkins-hbase4:39103] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39103,1685717850177 already deleted, retry=false 2023-06-02 14:58:18,400 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,400 INFO [M:0;jenkins-hbase4:39103] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39103,1685717850177; zookeeper connection closed. 
2023-06-02 14:58:18,400 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): master:39103-0x1008c0b9b2a0000, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,500 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,500 INFO [RS:0;jenkins-hbase4:36927] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36927,1685717850240; zookeeper connection closed. 2023-06-02 14:58:18,501 DEBUG [Listener at localhost/46485-EventThread] zookeeper.ZKWatcher(600): regionserver:36927-0x1008c0b9b2a0001, quorum=127.0.0.1:52513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:58:18,501 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4be55898] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4be55898 2023-06-02 14:58:18,502 INFO [Listener at localhost/46489] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-06-02 14:58:18,502 WARN [Listener at localhost/46489] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:18,507 INFO [Listener at localhost/46489] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:18,515 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:58:18,523 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid ae50a0db-af61-48db-96b8-ea952100721c) service to localhost/127.0.0.1:34397 2023-06-02 14:58:18,523 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data3/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:18,524 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data4/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:18,613 WARN [Listener at localhost/46489] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:18,620 INFO [Listener at localhost/46489] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:18,723 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:58:18,723 WARN [BP-527177821-172.31.14.131-1685717849563 heartbeating to localhost/127.0.0.1:34397] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-527177821-172.31.14.131-1685717849563 (Datanode Uuid bf903c2a-eebf-4615-a139-4613a5466680) service to 
localhost/127.0.0.1:34397 2023-06-02 14:58:18,724 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data9/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:18,724 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/cluster_b7eaf5bf-004f-bc16-a07f-6ae6746789d4/dfs/data/data10/current/BP-527177821-172.31.14.131-1685717849563] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:18,736 INFO [Listener at localhost/46489] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:18,858 INFO [Listener at localhost/46489] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-02 14:58:18,887 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-02 14:58:18,897 INFO [Listener at localhost/46489] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=74 (was 52) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:34397 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: 
nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:34397 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:34397 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:34397 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34397 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:34397 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46489 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=459 (was 440) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=57 (was 73), ProcessCount=170 (was 170), AvailableMemoryMB=1118 (was 1705) 2023-06-02 14:58:18,906 INFO [Listener at localhost/46489] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=74, OpenFileDescriptor=459, MaxFileDescriptor=60000, SystemLoadAverage=57, ProcessCount=170, AvailableMemoryMB=1117 2023-06-02 14:58:18,906 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/hadoop.log.dir so I do NOT create it in target/test-data/414c8654-16d1-3c79-3656-6c65175f582e 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/acaee100-ec95-af91-c9b7-5b553504f892/hadoop.tmp.dir so I do NOT create it in target/test-data/414c8654-16d1-3c79-3656-6c65175f582e 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326, deleteOnExit=true 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/test.cache.data in system properties and HBase conf 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/hadoop.tmp.dir in system properties and HBase conf 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/hadoop.log.dir in system properties and HBase conf 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-02 14:58:18,907 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-02 14:58:18,908 DEBUG [Listener at localhost/46489] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-02 
14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:58:18,908 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/nfs.dump.dir in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-02 14:58:18,909 INFO [Listener at localhost/46489] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-02 14:58:18,910 WARN [Listener at localhost/46489] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-02 14:58:18,913 WARN [Listener at localhost/46489] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:58:18,914 WARN [Listener at localhost/46489] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:58:18,956 WARN [Listener at localhost/46489] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:18,957 INFO [Listener at localhost/46489] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:18,962 INFO [Listener at localhost/46489] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_36533_hdfs____ragwqk/webapp 2023-06-02 14:58:19,053 INFO [Listener at localhost/46489] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36533 2023-06-02 14:58:19,054 WARN [Listener at localhost/46489] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-02 14:58:19,057 WARN [Listener at localhost/46489] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:58:19,058 WARN [Listener at localhost/46489] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:58:19,099 WARN [Listener at localhost/40075] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:19,110 WARN [Listener at localhost/40075] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:19,113 WARN [Listener at localhost/40075] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:19,114 INFO [Listener at localhost/40075] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:19,118 INFO [Listener at localhost/40075] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_43829_datanode____.f6rq7t/webapp 2023-06-02 14:58:19,208 INFO [Listener at localhost/40075] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43829 2023-06-02 14:58:19,214 WARN [Listener at localhost/42163] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:19,229 WARN [Listener at localhost/42163] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:19,231 WARN [Listener at localhost/42163] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:19,232 INFO [Listener at localhost/42163] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:19,237 INFO [Listener at localhost/42163] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_42911_datanode____.5nw5vi/webapp 2023-06-02 14:58:19,307 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4e2ae2e1cf136a49: Processing first storage report for DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:19,307 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4e2ae2e1cf136a49: from storage DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e node DatanodeRegistration(127.0.0.1:44101, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=39431, infoSecurePort=0, ipcPort=42163, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:19,307 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4e2ae2e1cf136a49: Processing first storage report for DS-82479956-e535-4096-8b7c-48955c12b884 from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:19,307 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4e2ae2e1cf136a49: from storage DS-82479956-e535-4096-8b7c-48955c12b884 node DatanodeRegistration(127.0.0.1:44101, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=39431, infoSecurePort=0, ipcPort=42163, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:19,333 INFO [Listener at localhost/42163] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42911 2023-06-02 14:58:19,339 WARN [Listener at localhost/44925] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:19,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x97b6e2fc8f1086bf: Processing first storage report for DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 14:58:19,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x97b6e2fc8f1086bf: from storage DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae node DatanodeRegistration(127.0.0.1:39559, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=41363, infoSecurePort=0, ipcPort=44925, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:19,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x97b6e2fc8f1086bf: Processing first storage report for DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 14:58:19,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x97b6e2fc8f1086bf: from storage DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 node DatanodeRegistration(127.0.0.1:39559, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=41363, infoSecurePort=0, ipcPort=44925, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:19,447 DEBUG [Listener at localhost/44925] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e 2023-06-02 14:58:19,449 INFO [Listener at localhost/44925] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/zookeeper_0, clientPort=50404, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-02 14:58:19,449 INFO [Listener at localhost/44925] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50404 2023-06-02 14:58:19,450 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,451 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,461 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:58:19,465 INFO [Listener at localhost/44925] util.FSUtils(471): Created version file at hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d with version=8 2023-06-02 14:58:19,465 INFO [Listener at localhost/44925] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging 2023-06-02 14:58:19,467 INFO [Listener at localhost/44925] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:58:19,467 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,467 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,467 INFO [Listener at localhost/44925] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:58:19,467 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,468 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:58:19,468 INFO [Listener at localhost/44925] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:58:19,469 INFO [Listener at localhost/44925] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36601 2023-06-02 14:58:19,469 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,470 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,471 INFO [Listener at localhost/44925] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36601 connecting to ZooKeeper ensemble=127.0.0.1:50404 2023-06-02 14:58:19,477 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:366010x0, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:58:19,478 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36601-0x1008c0c5bbb0000 connected 2023-06-02 14:58:19,492 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:58:19,492 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:58:19,492 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:58:19,493 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36601 2023-06-02 14:58:19,493 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36601 2023-06-02 14:58:19,493 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36601 2023-06-02 14:58:19,494 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36601 2023-06-02 14:58:19,494 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36601 2023-06-02 14:58:19,494 INFO [Listener at localhost/44925] master.HMaster(444): hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d, hbase.cluster.distributed=false 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:58:19,507 INFO [Listener at localhost/44925] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:58:19,508 INFO [Listener at localhost/44925] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38033 2023-06-02 14:58:19,508 INFO [Listener at localhost/44925] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 14:58:19,509 DEBUG [Listener at localhost/44925] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 14:58:19,510 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,511 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,512 INFO [Listener at localhost/44925] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38033 connecting to ZooKeeper ensemble=127.0.0.1:50404 2023-06-02 14:58:19,515 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:380330x0, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:58:19,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38033-0x1008c0c5bbb0001 connected 2023-06-02 14:58:19,516 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:58:19,516 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:58:19,517 DEBUG [Listener at localhost/44925] zookeeper.ZKUtil(164): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:58:19,517 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38033 2023-06-02 14:58:19,518 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38033 2023-06-02 14:58:19,518 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38033 2023-06-02 14:58:19,518 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38033 2023-06-02 14:58:19,518 DEBUG [Listener at localhost/44925] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38033 2023-06-02 14:58:19,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,521 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:58:19,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,523 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:58:19,523 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:58:19,523 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:58:19,524 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:58:19,524 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36601,1685717899466 from backup master directory 2023-06-02 14:58:19,527 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,527 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:58:19,527 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
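The master above binds its NettyRpcServer, starts its call-queue handlers, and joins the ZooKeeper ensemble at 127.0.0.1:50404. A sketch of the corresponding configuration (standard HBase keys; the exact overrides used by this test are an assumption, as is the class name):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcZkConfSketch {
      // Builds a Configuration with the kind of values echoed in the log.
      static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");            // ensemble host in the log
        conf.setInt("hbase.zookeeper.property.clientPort", 50404);  // mini-ZK client port
        conf.setInt("hbase.regionserver.handler.count", 3);         // cf. handlerCount=3 for default.FPBQ.Fifo
        return conf;
      }
    }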
2023-06-02 14:58:19,527 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/hbase.id with ID: 85a0a5f6-e907-4a70-8f8c-9ed59618718e 2023-06-02 14:58:19,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:19,555 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7be47e69 to 127.0.0.1:50404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:58:19,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@489961ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:58:19,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:58:19,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-02 14:58:19,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:58:19,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store-tmp 2023-06-02 14:58:19,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:19,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:58:19,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:19,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:19,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:58:19,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:19,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:58:19,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:58:19,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36601%2C1685717899466, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/oldWALs, maxLogs=10 2023-06-02 14:58:19,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717899580 2023-06-02 14:58:19,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK], DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] 2023-06-02 14:58:19,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:58:19,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:19,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,591 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,593 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-02 14:58:19,593 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-02 14:58:19,594 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:19,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:58:19,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:58:19,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875579, jitterRate=0.1133565753698349}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:58:19,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:58:19,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-02 14:58:19,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-02 14:58:19,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
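The master's local 'master:store' region is created above with a single 'proc' family (one version, ROW bloom filter, 64 KB blocks) and an FSHLog-backed WAL. For illustration only, an equivalent descriptor built with the public 2.x builder API (hypothetical class name; not how the master constructs it internally):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      // Mirrors the attributes printed in the log for master:store's 'proc' family.
      static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setMaxVersions(1)                 // VERSIONS => '1'
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setBlocksize(65536)               // BLOCKSIZE => '65536'
                .build())
            .build();
      }
    }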
2023-06-02 14:58:19,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-02 14:58:19,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-02 14:58:19,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-02 14:58:19,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-02 14:58:19,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-02 14:58:19,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-02 14:58:19,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-02 14:58:19,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
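The "Loaded config" line above echoes the StochasticLoadBalancer tuning (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000). A sketch of the keys generally used to set those values (key names are my best understanding of the stochastic balancer settings, not copied from this test; class name is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerConfSketch {
      static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30000);
        return conf;
      }
    }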
2023-06-02 14:58:19,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-02 14:58:19,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-02 14:58:19,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-02 14:58:19,626 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-02 14:58:19,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-02 14:58:19,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-02 14:58:19,631 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:58:19,631 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:58:19,631 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36601,1685717899466, sessionid=0x1008c0c5bbb0000, setting cluster-up flag (Was=false) 2023-06-02 14:58:19,636 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-02 14:58:19,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,647 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 
14:58:19,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-02 14:58:19,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:19,653 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.hbase-snapshot/.tmp 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:58:19,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685717929660 2023-06-02 14:58:19,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-02 14:58:19,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-02 14:58:19,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-02 14:58:19,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-02 14:58:19,661 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-02 14:58:19,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-02 14:58:19,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,663 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:58:19,667 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-02 14:58:19,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-02 14:58:19,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-02 14:58:19,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-02 14:58:19,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-02 14:58:19,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-02 14:58:19,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717899668,5,FailOnTimeoutGroup] 2023-06-02 14:58:19,669 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:58:19,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717899669,5,FailOnTimeoutGroup] 2023-06-02 14:58:19,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
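The log and HFile cleaner chores above load their cleaner classes from comma-separated plugin lists. A sketch wiring up exactly the cleaners named in the log (standard plugin keys; hypothetical class name, and the values mirror what the master initialized rather than an override this test necessarily made):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerConfSketch {
      static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
          + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
          + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
          + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner");
        return conf;
      }
    }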
2023-06-02 14:58:19,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-02 14:58:19,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,684 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:58:19,684 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:58:19,684 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d 2023-06-02 14:58:19,695 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:19,696 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:58:19,698 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/info 2023-06-02 14:58:19,698 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:58:19,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:19,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:58:19,700 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:58:19,700 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:58:19,701 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:19,701 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:58:19,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/table 2023-06-02 14:58:19,702 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:58:19,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:19,703 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740 2023-06-02 14:58:19,704 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740 2023-06-02 14:58:19,706 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:58:19,707 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:58:19,709 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855431, jitterRate=0.08773794770240784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:58:19,709 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:58:19,709 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:58:19,710 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:58:19,710 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:58:19,711 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:58:19,711 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-02 14:58:19,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 14:58:19,713 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 14:58:19,714 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
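At this point InitMetaProcedure has written the hbase:meta region layout and queued its assignment. Once that assignment completes, a plain client scan of the catalog table succeeds; a self-contained sketch of such a check (illustrative only, not part of TestLogRolling; the Configuration would still need the mini cluster's ZooKeeper quorum and port):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class MetaScanSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
          for (Result row : scanner) {
            System.out.println(row);  // one row per region once regions are assigned
          }
        }
      }
    }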
2023-06-02 14:58:19,720 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(951): ClusterId : 85a0a5f6-e907-4a70-8f8c-9ed59618718e 2023-06-02 14:58:19,721 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 14:58:19,724 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 14:58:19,724 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 14:58:19,727 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 14:58:19,728 DEBUG [RS:0;jenkins-hbase4:38033] zookeeper.ReadOnlyZKClient(139): Connect 0x32c14e51 to 127.0.0.1:50404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:58:19,734 DEBUG [RS:0;jenkins-hbase4:38033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36fedcd1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:58:19,734 DEBUG [RS:0;jenkins-hbase4:38033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73fc446a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:58:19,742 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38033 2023-06-02 14:58:19,742 INFO [RS:0;jenkins-hbase4:38033] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 14:58:19,743 INFO [RS:0;jenkins-hbase4:38033] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 14:58:19,743 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-02 14:58:19,743 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36601,1685717899466 with isa=jenkins-hbase4.apache.org/172.31.14.131:38033, startcode=1685717899506 2023-06-02 14:58:19,743 DEBUG [RS:0;jenkins-hbase4:38033] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 14:58:19,747 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60841, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 14:58:19,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,748 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d 2023-06-02 14:58:19,748 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40075 2023-06-02 14:58:19,749 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 14:58:19,751 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:58:19,751 DEBUG [RS:0;jenkins-hbase4:38033] zookeeper.ZKUtil(162): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,751 WARN [RS:0;jenkins-hbase4:38033] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 14:58:19,752 INFO [RS:0;jenkins-hbase4:38033] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:58:19,752 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,752 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38033,1685717899506] 2023-06-02 14:58:19,755 DEBUG [RS:0;jenkins-hbase4:38033] zookeeper.ZKUtil(162): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,756 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 14:58:19,756 INFO [RS:0;jenkins-hbase4:38033] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 14:58:19,757 INFO [RS:0;jenkins-hbase4:38033] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 14:58:19,758 INFO [RS:0;jenkins-hbase4:38033] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 14:58:19,758 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,758 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 14:58:19,759 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
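The regionserver's MemStoreFlusher reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, i.e. the limit is a fraction of the heap and the low mark a fraction of that limit. The usual knobs, shown at their defaults (an assumption; the test may simply be inheriting them; class name hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreLimitSketch {
      static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);              // fraction of RS heap
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f); // 743.3/782.4 is about 0.95
        return conf;
      }
    }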
2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,759 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,760 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,760 DEBUG [RS:0;jenkins-hbase4:38033] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:58:19,760 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,760 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,761 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:19,778 INFO [RS:0;jenkins-hbase4:38033] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 14:58:19,778 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38033,1685717899506-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
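
[Editor's note] The chores enabled above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore, ...) all ride on HBase's ChoreService/ScheduledChore machinery. A minimal sketch of that pattern follows, assuming the public ScheduledChore and ChoreService APIs; the HeartbeatChore name and 5-second period are illustrative, not taken from the log.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  /** Trivial chore that prints a heartbeat; the period is in milliseconds. */
  static class HeartbeatChore extends ScheduledChore {
    HeartbeatChore(Stoppable stopper) {
      super("HeartbeatChore", stopper, 5000);
    }
    @Override
    protected void chore() {
      System.out.println("heartbeat at " + System.currentTimeMillis());
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Minimal Stoppable so the chore can be cancelled.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("sketch");
    service.scheduleChore(new HeartbeatChore(stopper));
    Thread.sleep(12_000);   // let the chore fire a couple of times
    stopper.stop("done");
    service.shutdown();
  }
}
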
2023-06-02 14:58:19,794 INFO [RS:0;jenkins-hbase4:38033] regionserver.Replication(203): jenkins-hbase4.apache.org,38033,1685717899506 started 2023-06-02 14:58:19,794 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38033,1685717899506, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38033, sessionid=0x1008c0c5bbb0001 2023-06-02 14:58:19,794 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 14:58:19,794 DEBUG [RS:0;jenkins-hbase4:38033] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,794 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38033,1685717899506' 2023-06-02 14:58:19,794 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38033,1685717899506' 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 14:58:19,795 DEBUG [RS:0;jenkins-hbase4:38033] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 14:58:19,796 DEBUG [RS:0;jenkins-hbase4:38033] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 14:58:19,796 INFO [RS:0;jenkins-hbase4:38033] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 14:58:19,796 INFO [RS:0;jenkins-hbase4:38033] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
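
[Editor's note] The flush-table-proc and online-snapshot procedure members registered above are the server-side participants for coordinated table flushes and snapshots. A rough client-side sketch of exercising those paths is shown below; it assumes a running cluster and the standard Connection/Admin API, and the table and snapshot names are illustrative. Whether Admin.flush routes through flush-table-proc depends on the HBase version, so treat the comments as approximate.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndSnapshotSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("hbase:namespace");
      // Request a table-wide flush (version-dependent whether this uses
      // the flush-table-proc coordination registered in the log above).
      admin.flush(table);
      // Snapshots are coordinated through the online-snapshot members.
      admin.snapshot("namespace-snapshot", table);
    }
  }
}
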
2023-06-02 14:58:19,864 DEBUG [jenkins-hbase4:36601] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 14:58:19,865 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38033,1685717899506, state=OPENING 2023-06-02 14:58:19,868 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 14:58:19,869 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:19,870 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38033,1685717899506}] 2023-06-02 14:58:19,870 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:58:19,898 INFO [RS:0;jenkins-hbase4:38033] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38033%2C1685717899506, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/oldWALs, maxLogs=32 2023-06-02 14:58:19,908 INFO [RS:0;jenkins-hbase4:38033] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 2023-06-02 14:58:19,909 DEBUG [RS:0;jenkins-hbase4:38033] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] 2023-06-02 14:58:20,024 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:20,025 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 14:58:20,027 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43170, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 14:58:20,031 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 14:58:20,031 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:58:20,033 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38033%2C1685717899506.meta, suffix=.meta, logDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/oldWALs, maxLogs=32 2023-06-02 14:58:20,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.meta.1685717900039.meta 2023-06-02 14:58:20,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK], DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] 2023-06-02 14:58:20,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:58:20,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 14:58:20,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 14:58:20,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 14:58:20,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 14:58:20,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:20,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 14:58:20,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 14:58:20,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:58:20,050 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/info 2023-06-02 14:58:20,051 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/info 2023-06-02 14:58:20,051 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:58:20,052 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:20,052 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:58:20,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:58:20,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:58:20,053 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:58:20,053 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:20,053 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:58:20,054 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/table 2023-06-02 14:58:20,054 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740/table 2023-06-02 14:58:20,054 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:58:20,055 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:20,055 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740 2023-06-02 14:58:20,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/meta/1588230740 2023-06-02 14:58:20,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:58:20,060 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:58:20,061 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=715616, jitterRate=-0.09004826843738556}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:58:20,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:58:20,062 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685717900024 2023-06-02 14:58:20,066 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 14:58:20,066 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 14:58:20,067 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38033,1685717899506, state=OPEN 2023-06-02 14:58:20,070 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 14:58:20,070 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:58:20,072 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 14:58:20,072 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38033,1685717899506 in 200 msec 2023-06-02 14:58:20,074 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 14:58:20,074 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 361 msec 2023-06-02 14:58:20,076 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 421 msec 2023-06-02 14:58:20,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685717900076, completionTime=-1 2023-06-02 14:58:20,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 14:58:20,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 14:58:20,079 DEBUG [hconnection-0xad48cfb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:58:20,080 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43180, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:58:20,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 14:58:20,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685717960082 2023-06-02 14:58:20,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685718020082 2023-06-02 14:58:20,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36601,1685717899466-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36601,1685717899466-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36601,1685717899466-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36601, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-02 14:58:20,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:58:20,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 14:58:20,090 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 14:58:20,092 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:58:20,093 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:58:20,094 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,095 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f empty. 2023-06-02 14:58:20,095 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,095 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 14:58:20,109 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 14:58:20,111 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e6b850c8207c6dbdb56a2569196d5a8f, NAME => 'hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp 2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e6b850c8207c6dbdb56a2569196d5a8f, disabling compactions & flushes 2023-06-02 14:58:20,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 
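
[Editor's note] The hbase:namespace table is created above with the descriptor printed by HMaster(2148): a single 'info' family with a ROW bloom filter, IN_MEMORY=true, 10 versions, and 8 KB blocks. For reference, an equivalent descriptor built with the 2.x builder API might look like the following sketch; it is illustrative and not the code HBase itself runs internally.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceDescriptorSketch {
  public static TableDescriptor build() {
    // Mirrors the attributes logged for 'hbase:namespace' above.
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setInMemory(true)                   // IN_MEMORY => 'true'
        .setMaxVersions(10)                  // VERSIONS => '10'
        .setBlocksize(8192)                  // BLOCKSIZE => '8192'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase", "namespace"))
        .setColumnFamily(info)
        .build();
  }
}
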
2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. after waiting 0 ms 2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:58:20,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:58:20,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e6b850c8207c6dbdb56a2569196d5a8f: 2023-06-02 14:58:20,120 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:58:20,121 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717900121"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717900121"}]},"ts":"1685717900121"} 2023-06-02 14:58:20,123 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:58:20,124 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:58:20,124 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717900124"}]},"ts":"1685717900124"} 2023-06-02 14:58:20,126 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 14:58:20,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e6b850c8207c6dbdb56a2569196d5a8f, ASSIGN}] 2023-06-02 14:58:20,137 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e6b850c8207c6dbdb56a2569196d5a8f, ASSIGN 2023-06-02 14:58:20,138 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e6b850c8207c6dbdb56a2569196d5a8f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38033,1685717899506; forceNewPlan=false, retain=false 2023-06-02 14:58:20,289 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e6b850c8207c6dbdb56a2569196d5a8f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:20,290 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717900289"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717900289"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717900289"}]},"ts":"1685717900289"} 2023-06-02 14:58:20,292 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e6b850c8207c6dbdb56a2569196d5a8f, server=jenkins-hbase4.apache.org,38033,1685717899506}] 2023-06-02 14:58:20,449 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:58:20,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e6b850c8207c6dbdb56a2569196d5a8f, NAME => 'hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:58:20,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:20,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,451 INFO [StoreOpener-e6b850c8207c6dbdb56a2569196d5a8f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,452 DEBUG [StoreOpener-e6b850c8207c6dbdb56a2569196d5a8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f/info 2023-06-02 14:58:20,452 DEBUG [StoreOpener-e6b850c8207c6dbdb56a2569196d5a8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f/info 2023-06-02 14:58:20,453 INFO [StoreOpener-e6b850c8207c6dbdb56a2569196d5a8f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e6b850c8207c6dbdb56a2569196d5a8f columnFamilyName info 2023-06-02 14:58:20,453 INFO [StoreOpener-e6b850c8207c6dbdb56a2569196d5a8f-1] regionserver.HStore(310): Store=e6b850c8207c6dbdb56a2569196d5a8f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:20,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,457 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:58:20,459 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/hbase/namespace/e6b850c8207c6dbdb56a2569196d5a8f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:58:20,460 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e6b850c8207c6dbdb56a2569196d5a8f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=858414, jitterRate=0.09153072535991669}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:58:20,460 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e6b850c8207c6dbdb56a2569196d5a8f: 2023-06-02 14:58:20,462 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f., pid=6, masterSystemTime=1685717900445 2023-06-02 14:58:20,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:58:20,464 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 
2023-06-02 14:58:20,465 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e6b850c8207c6dbdb56a2569196d5a8f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:20,465 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717900465"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717900465"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717900465"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717900465"}]},"ts":"1685717900465"} 2023-06-02 14:58:20,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 14:58:20,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e6b850c8207c6dbdb56a2569196d5a8f, server=jenkins-hbase4.apache.org,38033,1685717899506 in 175 msec 2023-06-02 14:58:20,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 14:58:20,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e6b850c8207c6dbdb56a2569196d5a8f, ASSIGN in 334 msec 2023-06-02 14:58:20,473 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:58:20,473 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717900473"}]},"ts":"1685717900473"} 2023-06-02 14:58:20,475 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 14:58:20,477 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:58:20,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 388 msec 2023-06-02 14:58:20,491 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 14:58:20,493 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:58:20,493 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:20,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 14:58:20,505 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): 
master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:58:20,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-06-02 14:58:20,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 14:58:20,526 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:58:20,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-06-02 14:58:20,544 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 14:58:20,546 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 14:58:20,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.019sec 2023-06-02 14:58:20,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 14:58:20,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 14:58:20,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 14:58:20,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36601,1685717899466-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 14:58:20,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36601,1685717899466-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-02 14:58:20,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 14:58:20,621 DEBUG [Listener at localhost/44925] zookeeper.ReadOnlyZKClient(139): Connect 0x6a22797a to 127.0.0.1:50404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:58:20,625 DEBUG [Listener at localhost/44925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78363e3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:58:20,626 DEBUG [hconnection-0x31ca48bb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:58:20,628 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37556, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:58:20,630 INFO [Listener at localhost/44925] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:58:20,630 INFO [Listener at localhost/44925] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:58:20,634 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 14:58:20,634 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:58:20,635 INFO [Listener at localhost/44925] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 14:58:20,635 INFO [Listener at localhost/44925] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-02 14:58:20,635 INFO [Listener at localhost/44925] wal.TestLogRolling(432): Replication=2 2023-06-02 14:58:20,637 DEBUG [Listener at localhost/44925] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-02 14:58:20,640 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54012, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-02 14:58:20,641 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-02 14:58:20,642 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
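
[Editor's note] At this point the minicluster is up, the test switches the balancer off (set balanceSwitch=false), and testLogRollOnPipelineRestart begins; the TableDescriptorChecker warnings above show the deliberately tiny max filesize (786432) and memstore flush size (8192) used so that flushes and WAL rolls happen quickly. The sketch below approximates that harness setup using the public HBaseTestingUtility and Admin APIs; it is not the test's literal source, and placing these values on the shared Configuration is an assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class LogRollHarnessSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    Configuration conf = util.getConfiguration();
    // Deliberately small thresholds so regions flush and WALs roll quickly,
    // matching the values TableDescriptorChecker warns about in the log.
    conf.setLong("hbase.hregion.max.filesize", 786432L);
    conf.setInt("hbase.hregion.memstore.flush.size", 8192);
    util.startMiniCluster();                 // single master and region server by default
    Admin admin = util.getAdmin();
    admin.balancerSwitch(false, true);       // "set balanceSwitch=false" in the log
    // ... drive the log-rolling scenario against the cluster here ...
    util.shutdownMiniCluster();
  }
}
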
2023-06-02 14:58:20,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:58:20,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-02 14:58:20,645 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:58:20,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-02 14:58:20,646 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:58:20,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:58:20,648 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:20,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d empty. 
2023-06-02 14:58:20,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:20,649 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-02 14:58:20,662 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-02 14:58:20,663 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => f9230c768214a1e74e29f52854a9e60d, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/.tmp 2023-06-02 14:58:20,673 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:20,674 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing f9230c768214a1e74e29f52854a9e60d, disabling compactions & flushes 2023-06-02 14:58:20,674 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:20,674 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:20,674 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. after waiting 0 ms 2023-06-02 14:58:20,674 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:20,674 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 
2023-06-02 14:58:20,674 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for f9230c768214a1e74e29f52854a9e60d: 2023-06-02 14:58:20,677 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:58:20,678 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685717900677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717900677"}]},"ts":"1685717900677"} 2023-06-02 14:58:20,679 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:58:20,681 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:58:20,681 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717900681"}]},"ts":"1685717900681"} 2023-06-02 14:58:20,682 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-02 14:58:20,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f9230c768214a1e74e29f52854a9e60d, ASSIGN}] 2023-06-02 14:58:20,688 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f9230c768214a1e74e29f52854a9e60d, ASSIGN 2023-06-02 14:58:20,689 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f9230c768214a1e74e29f52854a9e60d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38033,1685717899506; forceNewPlan=false, retain=false 2023-06-02 14:58:20,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f9230c768214a1e74e29f52854a9e60d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:20,840 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685717900840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717900840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717900840"}]},"ts":"1685717900840"} 2023-06-02 14:58:20,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure f9230c768214a1e74e29f52854a9e60d, server=jenkins-hbase4.apache.org,38033,1685717899506}] 
2023-06-02 14:58:20,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:20,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f9230c768214a1e74e29f52854a9e60d, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:58:20,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:20,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:58:20,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:20,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:21,000 INFO [StoreOpener-f9230c768214a1e74e29f52854a9e60d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:21,002 DEBUG [StoreOpener-f9230c768214a1e74e29f52854a9e60d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d/info 2023-06-02 14:58:21,002 DEBUG [StoreOpener-f9230c768214a1e74e29f52854a9e60d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d/info 2023-06-02 14:58:21,002 INFO [StoreOpener-f9230c768214a1e74e29f52854a9e60d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f9230c768214a1e74e29f52854a9e60d columnFamilyName info 2023-06-02 14:58:21,003 INFO [StoreOpener-f9230c768214a1e74e29f52854a9e60d-1] regionserver.HStore(310): Store=f9230c768214a1e74e29f52854a9e60d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:58:21,004 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:21,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:21,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:58:21,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/data/default/TestLogRolling-testLogRollOnPipelineRestart/f9230c768214a1e74e29f52854a9e60d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:58:21,009 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f9230c768214a1e74e29f52854a9e60d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873976, jitterRate=0.1113186925649643}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:58:21,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f9230c768214a1e74e29f52854a9e60d: 2023-06-02 14:58:21,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d., pid=11, masterSystemTime=1685717900995 2023-06-02 14:58:21,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:21,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 
2023-06-02 14:58:21,012 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f9230c768214a1e74e29f52854a9e60d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:58:21,013 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685717901012"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717901012"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717901012"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717901012"}]},"ts":"1685717901012"} 2023-06-02 14:58:21,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-02 14:58:21,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure f9230c768214a1e74e29f52854a9e60d, server=jenkins-hbase4.apache.org,38033,1685717899506 in 173 msec 2023-06-02 14:58:21,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-02 14:58:21,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f9230c768214a1e74e29f52854a9e60d, ASSIGN in 331 msec 2023-06-02 14:58:21,020 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:58:21,020 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717901020"}]},"ts":"1685717901020"} 2023-06-02 14:58:21,021 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-02 14:58:21,025 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:58:21,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 383 msec 2023-06-02 14:58:23,397 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 14:58:25,756 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 14:58:25,757 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-02 14:58:30,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:58:30,648 INFO [Listener at localhost/44925] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 
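[editor's note] At this point the CreateTableProcedure (pid=9) has finished and the table is ENABLED in hbase:meta; the later "Validated row rowNNNN" lines come from writes against this table. For orientation, the sketch below shows the rough client-side call sequence that produces such a procedure and a WAL append. It is a minimal sketch against the standard HBase 2.x client API, not the actual TestLogRolling code; the qualifier and value used in the Put are made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateAndWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
          .build();
      // Drives a CreateTableProcedure on the master, like pid=9 above.
      admin.createTable(desc);

      try (Table table = conn.getTable(name)) {
        // Each put is appended to the region server's WAL before being acked.
        Put put = new Put(Bytes.toBytes("row1002"));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
        table.put(put);

        // Read the row back, the moral equivalent of the "Validated row row1002" check.
        Result result = table.get(new Get(Bytes.toBytes("row1002")));
        System.out.println("row1002 present: " + !result.isEmpty());
      }
    }
  }
}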
2023-06-02 14:58:30,651 DEBUG [Listener at localhost/44925] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-06-02 14:58:30,651 DEBUG [Listener at localhost/44925] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:58:32,657 INFO [Listener at localhost/44925] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 2023-06-02 14:58:32,658 WARN [Listener at localhost/44925] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:32,660 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:32,660 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:32,660 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.meta.1685717900039.meta block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK], DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]) is bad. 2023-06-02 14:58:32,660 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717899580 block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK], DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]) is bad. 
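[editor's note] The WARN/EOFException entries above mark the start of the pipeline restart: the test records the WAL's current file name and then shuts the datanodes down, so every open write pipeline (the master WAL and the region server's WAL and meta WAL) sees its downstream datanode reported as bad. A minimal sketch of such a restart step, assuming the usual HBaseTestingUtility/MiniDFSCluster test helpers (the method and variable names here are illustrative, not copied from the test):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RestartDataNodesSketch {
  public static void bounceDataNodes(HBaseTestingUtility util) throws Exception {
    MiniDFSCluster dfs = util.getDFSCluster();
    // Kills and restarts every datanode; in-flight WAL pipelines observe the
    // EOF / "datanode ... is bad" errors recorded in the surrounding log.
    dfs.restartDataNodes();
    // Block until the restarted datanodes have re-registered with the namenode.
    dfs.waitActive();
  }
}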
2023-06-02 14:58:32,665 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-02 14:58:32,666 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39559,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]) is bad. 2023-06-02 14:58:32,666 WARN [PacketResponder: BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39559]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,669 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:33976 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33976 dst: /127.0.0.1:44101 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,670 INFO [Listener at localhost/44925] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:32,672 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:33990 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33990 dst: /127.0.0.1:44101 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44101 remote=/127.0.0.1:33990]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,673 WARN [PacketResponder: BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44101]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,672 WARN [PacketResponder: BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44101]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,672 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-91772322_17 at /127.0.0.1:33944 
[Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33944 dst: /127.0.0.1:44101 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44101 remote=/127.0.0.1:33944]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,675 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:59454 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39559:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59454 dst: /127.0.0.1:39559 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,676 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-91772322_17 at /127.0.0.1:59420 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39559:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59420 dst: /127.0.0.1:39559 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,773 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:59440 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39559:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59440 dst: /127.0.0.1:39559 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:32,774 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:58:32,774 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid e0b08de6-01b4-4232-b754-2ab02f631b92) service to localhost/127.0.0.1:40075 2023-06-02 14:58:32,775 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data3/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:32,775 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data4/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:32,781 WARN [Listener at localhost/44925] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:32,784 WARN [Listener at localhost/44925] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:32,785 INFO [Listener at localhost/44925] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:32,790 INFO [Listener at localhost/44925] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_38979_datanode____.pmlpdv/webapp 2023-06-02 14:58:32,879 INFO [Listener at localhost/44925] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38979 2023-06-02 14:58:32,886 WARN [Listener at localhost/38253] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:32,891 WARN [Listener at localhost/38253] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:32,891 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:32,891 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:32,891 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:32,897 INFO [Listener at localhost/38253] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:32,960 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2d356c689bf8b9ee: Processing first storage report for DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 14:58:32,961 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2d356c689bf8b9ee: from storage DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae node DatanodeRegistration(127.0.0.1:42919, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=40411, infoSecurePort=0, ipcPort=38253, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-02 14:58:32,961 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2d356c689bf8b9ee: Processing first storage report for DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 
14:58:32,961 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2d356c689bf8b9ee: from storage DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 node DatanodeRegistration(127.0.0.1:42919, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=40411, infoSecurePort=0, ipcPort=38253, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:33,000 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:42932 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42932 dst: /127.0.0.1:44101 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:33,001 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:42928 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42928 dst: /127.0.0.1:44101 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:33,001 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-91772322_17 at /127.0.0.1:42924 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44101:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42924 dst: /127.0.0.1:44101 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:33,003 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:58:33,004 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid 63fe837f-ff5e-4054-8091-a1da0e1cd059) service to localhost/127.0.0.1:40075 2023-06-02 14:58:33,004 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:33,005 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data2/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:33,011 WARN [Listener at localhost/38253] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:33,013 WARN [Listener at localhost/38253] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:33,014 INFO [Listener at localhost/38253] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:33,019 INFO [Listener at localhost/38253] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_33779_datanode____.x9x8rk/webapp 2023-06-02 14:58:33,116 INFO [Listener at localhost/38253] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33779 2023-06-02 14:58:33,127 WARN [Listener at localhost/41413] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:33,199 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc1a41a869563faf9: Processing first storage report for DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:33,199 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc1a41a869563faf9: from storage DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e node DatanodeRegistration(127.0.0.1:35165, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=40591, infoSecurePort=0, ipcPort=41413, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:33,199 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc1a41a869563faf9: Processing first storage report for DS-82479956-e535-4096-8b7c-48955c12b884 from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:33,199 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc1a41a869563faf9: from storage DS-82479956-e535-4096-8b7c-48955c12b884 node DatanodeRegistration(127.0.0.1:35165, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=40591, infoSecurePort=0, ipcPort=41413, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:34,135 INFO [Listener at localhost/41413] wal.TestLogRolling(481): Data Nodes restarted 2023-06-02 14:58:34,137 INFO [Listener at localhost/41413] wal.AbstractTestLogRolling(233): Validated row row1002 2023-06-02 14:58:34,138 WARN [RS:0;jenkins-hbase4:38033.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:34,140 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38033%2C1685717899506:(num 1685717899899) roll requested 2023-06-02 14:58:34,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:34,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38033] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:37556 deadline: 1685717924137, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-02 14:58:34,150 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 newFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 2023-06-02 14:58:34,150 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-02 14:58:34,150 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 2023-06-02 14:58:34,150 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] 2023-06-02 14:58:34,151 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 is not closed yet, will try archiving it next time 2023-06-02 14:58:34,151 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:34,151 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:46,179 INFO [Listener at localhost/41413] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-02 14:58:48,181 WARN [Listener at localhost/41413] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:48,183 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:48,183 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]) is bad. 
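[editor's note] After the append fails with DamagedWALException, the log roller closes the damaged writer ("Riding over failed WAL close ... ALL EDITS SYNCED SO SHOULD BE OK") and opens a new WAL file on the restarted pipeline. An operator can trigger the same kind of roll explicitly from the client side; the sketch below uses the public Admin.rollWALWriter API and is illustrative only (enumerating every region server is an assumption, not something this test does):

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class RollWalSketch {
  public static void rollAll(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      // Ask every live region server to close its current WAL file and start a new one.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(sn);
      }
    }
  }
}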
2023-06-02 14:58:48,187 INFO [Listener at localhost/41413] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:48,189 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:52340 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:42919:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52340 dst: /127.0.0.1:42919 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42919 remote=/127.0.0.1:52340]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:48,189 WARN [PacketResponder: BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42919]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:48,190 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:53982 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:35165:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53982 dst: /127.0.0.1:35165 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:48,198 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid 63fe837f-ff5e-4054-8091-a1da0e1cd059) service to localhost/127.0.0.1:40075 2023-06-02 14:58:48,199 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:48,199 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data2/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:48,298 WARN [Listener at localhost/41413] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:48,300 WARN [Listener at localhost/41413] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:48,301 INFO [Listener at localhost/41413] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:48,307 INFO [Listener at localhost/41413] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_41527_datanode____7mygas/webapp 2023-06-02 14:58:48,399 INFO [Listener at localhost/41413] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41527 2023-06-02 14:58:48,407 WARN [Listener at localhost/43669] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:48,410 WARN [Listener at localhost/43669] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:58:48,410 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:58:48,416 INFO [Listener at localhost/43669] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:58:48,473 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5c96f3d4b901102b: Processing first storage report for DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:48,473 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5c96f3d4b901102b: from storage DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e node DatanodeRegistration(127.0.0.1:39165, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=34739, infoSecurePort=0, ipcPort=43669, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:48,473 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5c96f3d4b901102b: Processing first storage report for DS-82479956-e535-4096-8b7c-48955c12b884 from datanode 63fe837f-ff5e-4054-8091-a1da0e1cd059 2023-06-02 14:58:48,473 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5c96f3d4b901102b: from storage DS-82479956-e535-4096-8b7c-48955c12b884 node DatanodeRegistration(127.0.0.1:39165, datanodeUuid=63fe837f-ff5e-4054-8091-a1da0e1cd059, infoPort=34739, infoSecurePort=0, ipcPort=43669, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:48,519 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_378877907_17 at /127.0.0.1:48966 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:42919:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48966 dst: /127.0.0.1:42919 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:58:48,521 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:58:48,521 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid e0b08de6-01b4-4232-b754-2ab02f631b92) service to localhost/127.0.0.1:40075 2023-06-02 14:58:48,522 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data3/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:48,522 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data4/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:58:48,528 WARN [Listener at localhost/43669] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:58:48,530 WARN [Listener at localhost/43669] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:58:48,532 INFO [Listener at localhost/43669] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:58:48,537 INFO [Listener at localhost/43669] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/java.io.tmpdir/Jetty_localhost_34795_datanode____vzz4ah/webapp 2023-06-02 14:58:48,629 INFO [Listener at localhost/43669] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34795 2023-06-02 14:58:48,638 WARN [Listener at localhost/46867] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:58:48,704 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4ba55e8c58463d92: Processing first storage report for DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 14:58:48,704 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4ba55e8c58463d92: from storage DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae node DatanodeRegistration(127.0.0.1:40643, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=35173, infoSecurePort=0, ipcPort=46867, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:48,704 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4ba55e8c58463d92: Processing first storage report for DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 from datanode e0b08de6-01b4-4232-b754-2ab02f631b92 2023-06-02 14:58:48,704 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4ba55e8c58463d92: from storage DS-c2db86b1-39d1-477e-8e1c-550ac39fdf66 node DatanodeRegistration(127.0.0.1:40643, datanodeUuid=e0b08de6-01b4-4232-b754-2ab02f631b92, infoPort=35173, infoSecurePort=0, ipcPort=46867, storageInfo=lv=-57;cid=testClusterID;nsid=492265997;c=1685717898916), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:58:49,642 INFO [Listener at localhost/46867] wal.TestLogRolling(498): Data Nodes restarted 2023-06-02 14:58:49,644 INFO [Listener at localhost/46867] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-02 14:58:49,645 WARN [RS:0;jenkins-hbase4:38033.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,645 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38033%2C1685717899506:(num 1685717914140) roll requested 2023-06-02 14:58:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38033] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:37556 deadline: 1685717939644, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-02 14:58:49,654 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 newFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 2023-06-02 14:58:49,654 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-02 14:58:49,654 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 2023-06-02 14:58:49,655 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,655 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40643,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] 2023-06-02 14:58:49,655 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:42919,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,655 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 is not closed yet, will try archiving it next time 2023-06-02 14:58:49,661 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,661 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36601%2C1685717899466:(num 1685717899580) roll requested 2023-06-02 14:58:49,661 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,661 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,671 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-02 14:58:49,671 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717899580 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717929661 2023-06-02 14:58:49,671 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:40643,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] 2023-06-02 14:58:49,671 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717899580 is not closed yet, will try archiving it next time 2023-06-02 14:58:49,671 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:58:49,671 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466/jenkins-hbase4.apache.org%2C36601%2C1685717899466.1685717899580; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:01,729 DEBUG [Listener at localhost/46867] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 newFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 2023-06-02 14:59:01,730 INFO [Listener at localhost/46867] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 2023-06-02 14:59:01,734 DEBUG [Listener at localhost/46867] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:40643,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]] 2023-06-02 14:59:01,735 DEBUG [Listener at localhost/46867] wal.AbstractFSWAL(716): hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 is not closed yet, will try archiving it next time 2023-06-02 14:59:01,735 DEBUG [Listener at localhost/46867] wal.TestLogRolling(512): recovering lease for hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 2023-06-02 14:59:01,736 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 2023-06-02 14:59:01,739 WARN [IPC Server handler 3 on default port 40075] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1016 2023-06-02 14:59:01,741 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 after 5ms 2023-06-02 14:59:02,784 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1eef45ff] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1075295525-172.31.14.131-1685717898916:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:40643,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data4/current/BP-1075295525-172.31.14.131-1685717898916/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:05,742 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 after 4006ms 2023-06-02 14:59:05,742 DEBUG [Listener at localhost/46867] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717899899 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685717900460/Put/vlen=175/seqid=0] 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #4: [default/info:d/1685717900501/Put/vlen=9/seqid=0] 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #5: [hbase/info:d/1685717900523/Put/vlen=7/seqid=0] 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #3: 
[\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685717901009/Put/vlen=231/seqid=0] 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #4: [row1002/info:/1685717910655/Put/vlen=1045/seqid=0] 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.ProtobufLogReader(420): EOF at position 2160 2023-06-02 14:59:05,752 DEBUG [Listener at localhost/46867] wal.TestLogRolling(512): recovering lease for hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 2023-06-02 14:59:05,752 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 2023-06-02 14:59:05,753 WARN [IPC Server handler 3 on default port 40075] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018 2023-06-02 14:59:05,753 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 after 1ms 2023-06-02 14:59:06,708 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@c548ef2] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1075295525-172.31.14.131-1685717898916:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:39165,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current/BP-1075295525-172.31.14.131-1685717898916/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current/BP-1075295525-172.31.14.131-1685717898916/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-06-02 14:59:09,754 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 after 4002ms 2023-06-02 14:59:09,754 DEBUG [Listener at localhost/46867] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717914140 2023-06-02 14:59:09,758 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #6: [row1003/info:/1685717924174/Put/vlen=1045/seqid=0] 2023-06-02 14:59:09,758 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #7: [row1004/info:/1685717926180/Put/vlen=1045/seqid=0] 2023-06-02 14:59:09,758 DEBUG [Listener at localhost/46867] wal.ProtobufLogReader(420): EOF at position 2425 2023-06-02 14:59:09,758 DEBUG [Listener at localhost/46867] wal.TestLogRolling(512): recovering lease for hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 2023-06-02 14:59:09,758 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 2023-06-02 14:59:09,759 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 after 0ms 2023-06-02 14:59:09,759 DEBUG [Listener at localhost/46867] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717929646 2023-06-02 14:59:09,762 DEBUG [Listener at localhost/46867] wal.TestLogRolling(522): #9: [row1005/info:/1685717939717/Put/vlen=1045/seqid=0] 2023-06-02 14:59:09,762 DEBUG [Listener at localhost/46867] wal.TestLogRolling(512): recovering lease for hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 2023-06-02 14:59:09,762 INFO [Listener at localhost/46867] 
util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 2023-06-02 14:59:09,762 WARN [IPC Server handler 0 on default port 40075] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-06-02 14:59:09,763 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 after 1ms 2023-06-02 14:59:10,709 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-91772322_17 at /127.0.0.1:42320 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:39165:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42320 dst: /127.0.0.1:39165 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39165 remote=/127.0.0.1:42320]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:10,711 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-91772322_17 at /127.0.0.1:39288 [Receiving block BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:40643:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39288 dst: /127.0.0.1:40643 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:10,710 WARN [ResponseProcessor for block BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-02 14:59:10,711 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 block BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK], DatanodeInfoWithStorage[127.0.0.1:40643,DS-2ab53071-6998-4e3e-bfc6-8ded8516e6ae,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39165,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]) is bad. 
2023-06-02 14:59:10,716 WARN [DataStreamer for file /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 block BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,763 INFO [Listener at localhost/46867] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 after 4001ms 2023-06-02 14:59:13,763 DEBUG [Listener at localhost/46867] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 2023-06-02 14:59:13,767 DEBUG [Listener at localhost/46867] wal.ProtobufLogReader(420): EOF at position 83 2023-06-02 14:59:13,768 INFO [Listener at localhost/46867] regionserver.HRegion(2745): Flushing e6b850c8207c6dbdb56a2569196d5a8f 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 14:59:13,769 WARN [RS:0;jenkins-hbase4:38033.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,769 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38033%2C1685717899506:(num 1685717941720) roll requested 2023-06-02 14:59:13,769 DEBUG [Listener at localhost/46867] regionserver.HRegion(2446): Flush status journal for e6b850c8207c6dbdb56a2569196d5a8f: 2023-06-02 14:59:13,770 INFO [Listener at localhost/46867] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) 
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,771 INFO [Listener at localhost/46867] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-06-02 14:59:13,771 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,771 DEBUG [Listener at localhost/46867] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-02 14:59:13,772 INFO [Listener at localhost/46867] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,773 INFO [Listener at localhost/46867] regionserver.HRegion(2745): Flushing f9230c768214a1e74e29f52854a9e60d 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-02 14:59:13,773 DEBUG [Listener at localhost/46867] regionserver.HRegion(2446): Flush status journal for f9230c768214a1e74e29f52854a9e60d: 2023-06-02 14:59:13,773 INFO [Listener at localhost/46867] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,776 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 14:59:13,776 INFO [Listener at localhost/46867] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-02 14:59:13,776 DEBUG [Listener at localhost/46867] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a22797a to 127.0.0.1:50404 2023-06-02 14:59:13,776 DEBUG [Listener at localhost/46867] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:59:13,776 DEBUG [Listener at 
localhost/46867] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 14:59:13,776 DEBUG [Listener at localhost/46867] util.JVMClusterUtil(257): Found active master hash=1502433465, stopped=false 2023-06-02 14:59:13,777 INFO [Listener at localhost/46867] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:59:13,780 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 newFile=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717953769 2023-06-02 14:59:13,780 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-02 14:59:13,780 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:59:13,780 INFO [Listener at localhost/46867] procedure2.ProcedureExecutor(629): Stopping 2023-06-02 14:59:13,780 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717953769 2023-06-02 14:59:13,780 DEBUG [Listener at localhost/46867] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7be47e69 to 127.0.0.1:50404 2023-06-02 14:59:13,780 DEBUG [Listener at localhost/46867] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:59:13,780 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:13,781 INFO [Listener at localhost/46867] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38033,1685717899506' ***** 2023-06-02 14:59:13,781 INFO [Listener at localhost/46867] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 14:59:13,781 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:59:13,780 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 14:59:13,780 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,781 INFO [RS:0;jenkins-hbase4:38033] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 14:59:13,781 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720 failed. Cause="Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-02 14:59:13,781 INFO [RS:0;jenkins-hbase4:38033] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-06-02 14:59:13,782 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,782 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 14:59:13,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:59:13,782 INFO [RS:0;jenkins-hbase4:38033] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-02 14:59:13,782 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506/jenkins-hbase4.apache.org%2C38033%2C1685717899506.1685717941720, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1075295525-172.31.14.131-1685717898916:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,782 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:59:13,783 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:59:13,783 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:13,784 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:59:13,785 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:59:13,785 DEBUG [RS:0;jenkins-hbase4:38033] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32c14e51 to 127.0.0.1:50404 2023-06-02 14:59:13,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e6b850c8207c6dbdb56a2569196d5a8f, disabling compactions & flushes 2023-06-02 14:59:13,785 DEBUG [RS:0;jenkins-hbase4:38033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:59:13,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,785 INFO [RS:0;jenkins-hbase4:38033] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 14:59:13,785 INFO [RS:0;jenkins-hbase4:38033] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 14:59:13,785 INFO [RS:0;jenkins-hbase4:38033] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-02 14:59:13,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,785 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 14:59:13,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. after waiting 0 ms 2023-06-02 14:59:13,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 
2023-06-02 14:59:13,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e6b850c8207c6dbdb56a2569196d5a8f 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-02 14:59:13,786 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e6b850c8207c6dbdb56a2569196d5a8f: 2023-06-02 14:59:13,786 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,38033,1685717899506: Unrecoverable exception while closing hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. ***** java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 
at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:13,786 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-02 14:59:13,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-02 14:59:13,786 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-02 14:59:13,786 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/WALs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:59:13,786 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1478): Online Regions={e6b850c8207c6dbdb56a2569196d5a8f=hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f., 1588230740=hbase:meta,,1.1588230740, f9230c768214a1e74e29f52854a9e60d=TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d.} 2023-06-02 14:59:13,787 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:59:13,787 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for e6b850c8207c6dbdb56a2569196d5a8f 2023-06-02 14:59:13,788 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:59:13,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:59:13,787 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,788 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1504): Waiting on 1588230740, e6b850c8207c6dbdb56a2569196d5a8f, f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:59:13,788 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44101,DS-c809885d-ebcb-4d22-a702-d8d4c75b3b1e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-02 14:59:13,788 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-06-02 14:59:13,788 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38033%2C1685717899506.meta:.meta(num 1685717900039) roll requested 2023-06-02 14:59:13,788 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-06-02 14:59:13,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:59:13,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-02 14:59:13,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:59:13,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-02 14:59:13,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:59:13,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-02 14:59:13,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-02 14:59:13,789 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1059061760, "init": 513802240, "max": 2051014656, "used": 319839544 }, "NonHeapMemoryUsage": { "committed": 139354112, "init": 2555904, "max": -1, "used": 136777496 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-02 14:59:13,790 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36601] master.MasterRpcServices(609): jenkins-hbase4.apache.org,38033,1685717899506 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,38033,1685717899506: Unrecoverable exception while closing hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. ***** Cause: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-02 14:59:13,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f9230c768214a1e74e29f52854a9e60d, disabling compactions & flushes 2023-06-02 14:59:13,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 
after waiting 0 ms 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f9230c768214a1e74e29f52854a9e60d: 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e6b850c8207c6dbdb56a2569196d5a8f, disabling compactions & flushes 2023-06-02 14:59:13,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. after waiting 0 ms 2023-06-02 14:59:13,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,791 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 2023-06-02 14:59:13,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e6b850c8207c6dbdb56a2569196d5a8f: 2023-06-02 14:59:13,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685717900089.e6b850c8207c6dbdb56a2569196d5a8f. 
2023-06-02 14:59:13,988 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 14:59:13,988 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(3303): Received CLOSE for f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:59:13,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:59:13,988 DEBUG [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1504): Waiting on 1588230740, f9230c768214a1e74e29f52854a9e60d 2023-06-02 14:59:13,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:59:13,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f9230c768214a1e74e29f52854a9e60d, disabling compactions & flushes 2023-06-02 14:59:13,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:59:13,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:59:13,989 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:59:13,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. after waiting 0 ms 2023-06-02 14:59:13,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,989 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 3024 in region hbase:meta,,1.1588230740 2023-06-02 14:59:13,989 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,989 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 14:59:13,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 
2023-06-02 14:59:13,990 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:59:13,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f9230c768214a1e74e29f52854a9e60d: 2023-06-02 14:59:13,990 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:59:13,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685717900641.f9230c768214a1e74e29f52854a9e60d. 2023-06-02 14:59:13,990 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-02 14:59:14,188 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38033,1685717899506; all regions closed. 2023-06-02 14:59:14,189 DEBUG [RS:0;jenkins-hbase4:38033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:59:14,189 INFO [RS:0;jenkins-hbase4:38033] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:59:14,189 INFO [RS:0;jenkins-hbase4:38033] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-02 14:59:14,189 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 14:59:14,190 INFO [RS:0;jenkins-hbase4:38033] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38033 2023-06-02 14:59:14,193 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38033,1685717899506 2023-06-02 14:59:14,193 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:59:14,193 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:59:14,195 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38033,1685717899506] 2023-06-02 14:59:14,195 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38033,1685717899506; numProcessing=1 2023-06-02 14:59:14,196 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38033,1685717899506 already deleted, retry=false 2023-06-02 14:59:14,196 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38033,1685717899506 expired; onlineServers=0 2023-06-02 14:59:14,196 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36601,1685717899466' ***** 2023-06-02 14:59:14,196 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; 
onlineServer=0 2023-06-02 14:59:14,197 DEBUG [M:0;jenkins-hbase4:36601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5db2cf50, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:59:14,197 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:59:14,197 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36601,1685717899466; all regions closed. 2023-06-02 14:59:14,197 DEBUG [M:0;jenkins-hbase4:36601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 14:59:14,197 DEBUG [M:0;jenkins-hbase4:36601] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 14:59:14,197 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-02 14:59:14,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717899669] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717899669,5,FailOnTimeoutGroup] 2023-06-02 14:59:14,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717899668] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717899668,5,FailOnTimeoutGroup] 2023-06-02 14:59:14,197 DEBUG [M:0;jenkins-hbase4:36601] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-02 14:59:14,198 INFO [M:0;jenkins-hbase4:36601] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-02 14:59:14,198 INFO [M:0;jenkins-hbase4:36601] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-02 14:59:14,198 INFO [M:0;jenkins-hbase4:36601] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-02 14:59:14,198 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-02 14:59:14,198 DEBUG [M:0;jenkins-hbase4:36601] master.HMaster(1512): Stopping service threads 2023-06-02 14:59:14,199 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:14,199 INFO [M:0;jenkins-hbase4:36601] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-02 14:59:14,199 ERROR [M:0;jenkins-hbase4:36601] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-02 14:59:14,199 INFO [M:0;jenkins-hbase4:36601] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-02 14:59:14,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:59:14,199 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-02 14:59:14,200 DEBUG [M:0;jenkins-hbase4:36601] zookeeper.ZKUtil(398): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-02 14:59:14,200 WARN [M:0;jenkins-hbase4:36601] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-02 14:59:14,200 INFO [M:0;jenkins-hbase4:36601] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-02 14:59:14,200 INFO [M:0;jenkins-hbase4:36601] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-02 14:59:14,200 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:59:14,200 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:14,200 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:14,200 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:59:14,201 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
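The ZKWatcher entries throughout this shutdown refer to the cluster's znode layout under /hbase (/hbase/master, /hbase/backup-masters, /hbase/rs). Purely as an illustration, assuming a plain ZooKeeper client pointed at the ensemble shown in the log (127.0.0.1:50404 in this run; the class name is hypothetical), the same paths can be inspected and watched like this:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class HBaseZNodeListSketch {
      public static void main(String[] args) throws Exception {
        // Ensemble address taken from this run's log; it changes on every run.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50404", 30000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // NodeDeleted / NodeChildrenChanged events, like the ones ZKWatcher logs above.
            System.out.println("event: " + event.getType() + " on " + event.getPath());
          }
        });
        // The znodes touched during shutdown live under /hbase.
        List<String> children = zk.getChildren("/hbase", true);
        System.out.println("/hbase children: " + children);
        zk.close();
      }
    }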
2023-06-02 14:59:14,201 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB 2023-06-02 14:59:14,214 INFO [M:0;jenkins-hbase4:36601] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ca58bbd5c2df4190b4a9d8667794f6d6 2023-06-02 14:59:14,220 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ca58bbd5c2df4190b4a9d8667794f6d6 as hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ca58bbd5c2df4190b4a9d8667794f6d6 2023-06-02 14:59:14,225 INFO [M:0;jenkins-hbase4:36601] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ca58bbd5c2df4190b4a9d8667794f6d6, entries=11, sequenceid=92, filesize=7.0 K 2023-06-02 14:59:14,226 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=92, compaction requested=false 2023-06-02 14:59:14,228 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:14,228 DEBUG [M:0;jenkins-hbase4:36601] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:59:14,228 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2aeb8d4d-9123-2471-ddce-a80801c2d01d/MasterData/WALs/jenkins-hbase4.apache.org,36601,1685717899466 2023-06-02 14:59:14,232 INFO [M:0;jenkins-hbase4:36601] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-02 14:59:14,232 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 14:59:14,232 INFO [M:0;jenkins-hbase4:36601] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36601 2023-06-02 14:59:14,235 DEBUG [M:0;jenkins-hbase4:36601] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36601,1685717899466 already deleted, retry=false 2023-06-02 14:59:14,295 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:59:14,295 INFO [RS:0;jenkins-hbase4:38033] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38033,1685717899506; zookeeper connection closed. 
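The flush recorded above is the master's local 'master:store' region being flushed automatically as it closes: the memstore is written to a .tmp HFile, committed into the store, and the WAL writer is closed. For an ordinary table the same flush-and-commit path can be triggered explicitly through the public Admin API; a minimal sketch, with a hypothetical table name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Asks the region servers to flush the table's memstores to HFiles,
          // producing "Flushed memstore data size=..." entries like the one above.
          admin.flush(TableName.valueOf("some_table"));
        }
      }
    }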
2023-06-02 14:59:14,295 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): regionserver:38033-0x1008c0c5bbb0001, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:59:14,296 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1c048710] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1c048710 2023-06-02 14:59:14,299 INFO [Listener at localhost/46867] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-02 14:59:14,395 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:59:14,395 INFO [M:0;jenkins-hbase4:36601] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36601,1685717899466; zookeeper connection closed. 2023-06-02 14:59:14,396 DEBUG [Listener at localhost/44925-EventThread] zookeeper.ZKWatcher(600): master:36601-0x1008c0c5bbb0000, quorum=127.0.0.1:50404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 14:59:14,397 WARN [Listener at localhost/46867] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:59:14,400 INFO [Listener at localhost/46867] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:59:14,504 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:59:14,504 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid e0b08de6-01b4-4232-b754-2ab02f631b92) service to localhost/127.0.0.1:40075 2023-06-02 14:59:14,504 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data3/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:59:14,505 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data4/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:59:14,506 WARN [Listener at localhost/46867] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 14:59:14,510 INFO [Listener at localhost/46867] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:59:14,613 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 14:59:14,613 WARN [BP-1075295525-172.31.14.131-1685717898916 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1075295525-172.31.14.131-1685717898916 (Datanode Uuid 63fe837f-ff5e-4054-8091-a1da0e1cd059) service to localhost/127.0.0.1:40075 2023-06-02 14:59:14,614 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data1/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:59:14,614 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/cluster_9eb7c558-9bcd-6d5c-7838-cb2c97620326/dfs/data/data2/current/BP-1075295525-172.31.14.131-1685717898916] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 14:59:14,625 INFO [Listener at localhost/46867] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 14:59:14,736 INFO [Listener at localhost/46867] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-02 14:59:14,748 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-02 14:59:14,758 INFO [Listener at localhost/46867] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=86 (was 74) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40075 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:40075 from jenkins java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:40075 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (67771434) connection to localhost/127.0.0.1:40075 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40075 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46867 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) 
org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=462 (was 459) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=108 (was 57) - SystemLoadAverage LEAK? 
-, ProcessCount=170 (was 170), AvailableMemoryMB=625 (was 1117) 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=86, OpenFileDescriptor=462, MaxFileDescriptor=60000, SystemLoadAverage=108, ProcessCount=170, AvailableMemoryMB=625 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/hadoop.log.dir so I do NOT create it in target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/414c8654-16d1-3c79-3656-6c65175f582e/hadoop.tmp.dir so I do NOT create it in target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6, deleteOnExit=true 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/test.cache.data in system properties and HBase conf 2023-06-02 14:59:14,766 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/hadoop.tmp.dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/hadoop.log.dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-02 14:59:14,767 DEBUG 
[Listener at localhost/46867] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-02 14:59:14,767 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/nfs.dump.dir in system properties 
and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/java.io.tmpdir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-02 14:59:14,768 INFO [Listener at localhost/46867] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-02 14:59:14,770 WARN [Listener at localhost/46867] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-02 14:59:14,773 WARN [Listener at localhost/46867] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:59:14,773 WARN [Listener at localhost/46867] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:59:14,811 WARN [Listener at localhost/46867] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:59:14,813 INFO [Listener at localhost/46867] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:59:14,817 INFO [Listener at localhost/46867] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/java.io.tmpdir/Jetty_localhost_42051_hdfs____2ojyuw/webapp 2023-06-02 14:59:14,907 INFO [Listener at localhost/46867] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42051 2023-06-02 14:59:14,909 WARN [Listener at localhost/46867] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
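At this point the next test case (testCompactionRecordDoesntBlockRolling) is standing up a fresh mini cluster: one master, one region server, two datanodes and one ZooKeeper server, per the StartMiniClusterOption printed above, followed by DFS formatting with clusterid testClusterID. A minimal sketch, assuming the standard HBase 2.x test utilities (the test class name is hypothetical), of the setup call that produces this sequence:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.BeforeClass;

    public class MiniClusterStartupSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpBeforeClass() throws Exception {
        // Mirrors the option string in the log: numMasters=1, numRegionServers=1,
        // numDataNodes=2, numZkServers=1 (root dir and WAL dir created on demand).
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }
    }

The "No unit for dfs.heartbeat.interval(1) assuming SECONDS" warnings just mean those HDFS intervals were set as bare numbers; Hadoop's Configuration.setTimeDuration can set them with an explicit unit instead, which avoids the warning.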
2023-06-02 14:59:14,912 WARN [Listener at localhost/46867] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 14:59:14,912 WARN [Listener at localhost/46867] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 14:59:14,948 WARN [Listener at localhost/39605] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:59:14,957 WARN [Listener at localhost/39605] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:59:14,960 WARN [Listener at localhost/39605] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:59:14,961 INFO [Listener at localhost/39605] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:59:14,965 INFO [Listener at localhost/39605] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/java.io.tmpdir/Jetty_localhost_34695_datanode____72f1vc/webapp 2023-06-02 14:59:15,055 INFO [Listener at localhost/39605] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34695 2023-06-02 14:59:15,061 WARN [Listener at localhost/38963] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:59:15,072 WARN [Listener at localhost/38963] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 14:59:15,074 WARN [Listener at localhost/38963] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 14:59:15,075 INFO [Listener at localhost/38963] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 14:59:15,079 INFO [Listener at localhost/38963] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/java.io.tmpdir/Jetty_localhost_34763_datanode____.48s0o6/webapp 2023-06-02 14:59:15,151 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b5a7c839885e597: Processing first storage report for DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81 from datanode 111a2ee7-ccde-4e3b-939b-b51826d48503 2023-06-02 14:59:15,152 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b5a7c839885e597: from storage DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81 node DatanodeRegistration(127.0.0.1:40699, datanodeUuid=111a2ee7-ccde-4e3b-939b-b51826d48503, infoPort=32803, infoSecurePort=0, ipcPort=38963, storageInfo=lv=-57;cid=testClusterID;nsid=1979212822;c=1685717954776), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-02 14:59:15,152 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b5a7c839885e597: Processing first storage report for DS-384ad3f2-f959-4b81-aa49-a1906f31878f from datanode 111a2ee7-ccde-4e3b-939b-b51826d48503 2023-06-02 14:59:15,152 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x5b5a7c839885e597: from storage DS-384ad3f2-f959-4b81-aa49-a1906f31878f node DatanodeRegistration(127.0.0.1:40699, datanodeUuid=111a2ee7-ccde-4e3b-939b-b51826d48503, infoPort=32803, infoSecurePort=0, ipcPort=38963, storageInfo=lv=-57;cid=testClusterID;nsid=1979212822;c=1685717954776), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:59:15,170 INFO [Listener at localhost/38963] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34763 2023-06-02 14:59:15,178 WARN [Listener at localhost/44673] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 14:59:15,268 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a24b8fe1bb53b43: Processing first storage report for DS-9dad7ae9-015a-4543-98b4-a34247417bb6 from datanode 6c1e2dd5-54ab-4d58-9ab7-8080887caecb 2023-06-02 14:59:15,268 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a24b8fe1bb53b43: from storage DS-9dad7ae9-015a-4543-98b4-a34247417bb6 node DatanodeRegistration(127.0.0.1:39585, datanodeUuid=6c1e2dd5-54ab-4d58-9ab7-8080887caecb, infoPort=45339, infoSecurePort=0, ipcPort=44673, storageInfo=lv=-57;cid=testClusterID;nsid=1979212822;c=1685717954776), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:59:15,269 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a24b8fe1bb53b43: Processing first storage report for DS-dde4f184-0073-4882-b4f0-b35ba92211d8 from datanode 6c1e2dd5-54ab-4d58-9ab7-8080887caecb 2023-06-02 14:59:15,269 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a24b8fe1bb53b43: from storage DS-dde4f184-0073-4882-b4f0-b35ba92211d8 node DatanodeRegistration(127.0.0.1:39585, datanodeUuid=6c1e2dd5-54ab-4d58-9ab7-8080887caecb, infoPort=45339, infoSecurePort=0, ipcPort=44673, storageInfo=lv=-57;cid=testClusterID;nsid=1979212822;c=1685717954776), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 14:59:15,284 DEBUG [Listener at localhost/44673] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34 2023-06-02 14:59:15,286 INFO [Listener at localhost/44673] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/zookeeper_0, clientPort=51040, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-02 14:59:15,287 INFO [Listener at localhost/44673] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51040 2023-06-02 14:59:15,288 INFO [Listener at localhost/44673] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,288 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,300 INFO [Listener at localhost/44673] util.FSUtils(471): Created version file at hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb with version=8 2023-06-02 14:59:15,300 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:59:15,302 INFO [Listener at localhost/44673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:59:15,303 INFO [Listener at localhost/44673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34025 2023-06-02 14:59:15,304 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,304 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,305 INFO [Listener at localhost/44673] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34025 connecting to ZooKeeper ensemble=127.0.0.1:51040 2023-06-02 14:59:15,314 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:340250x0, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:59:15,315 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34025-0x1008c0d35d90000 connected 2023-06-02 14:59:15,345 DEBUG [Listener at localhost/44673] 
zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:59:15,346 DEBUG [Listener at localhost/44673] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:59:15,347 DEBUG [Listener at localhost/44673] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:59:15,347 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34025 2023-06-02 14:59:15,348 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34025 2023-06-02 14:59:15,348 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34025 2023-06-02 14:59:15,349 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34025 2023-06-02 14:59:15,349 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34025 2023-06-02 14:59:15,349 INFO [Listener at localhost/44673] master.HMaster(444): hbase.rootdir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb, hbase.cluster.distributed=false 2023-06-02 14:59:15,362 INFO [Listener at localhost/44673] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 14:59:15,363 INFO [Listener at localhost/44673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 14:59:15,364 INFO [Listener at localhost/44673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36801 2023-06-02 14:59:15,365 INFO [Listener at localhost/44673] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 14:59:15,367 DEBUG [Listener at localhost/44673] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 
14:59:15,368 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,369 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,370 INFO [Listener at localhost/44673] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36801 connecting to ZooKeeper ensemble=127.0.0.1:51040 2023-06-02 14:59:15,373 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:368010x0, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 14:59:15,374 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36801-0x1008c0d35d90001 connected 2023-06-02 14:59:15,374 DEBUG [Listener at localhost/44673] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 14:59:15,374 DEBUG [Listener at localhost/44673] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 14:59:15,375 DEBUG [Listener at localhost/44673] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 14:59:15,375 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36801 2023-06-02 14:59:15,376 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36801 2023-06-02 14:59:15,376 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36801 2023-06-02 14:59:15,376 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36801 2023-06-02 14:59:15,376 DEBUG [Listener at localhost/44673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36801 2023-06-02 14:59:15,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,379 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:59:15,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,381 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:59:15,381 DEBUG [Listener at 
localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 14:59:15,381 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,382 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:59:15,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 14:59:15,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34025,1685717955301 from backup master directory 2023-06-02 14:59:15,385 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,385 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 14:59:15,385 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 14:59:15,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/hbase.id with ID: f4c7cadd-43da-4a68-9895-65a165f5dc9b 2023-06-02 14:59:15,411 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:15,413 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x356b32c6 to 127.0.0.1:51040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:59:15,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@193e7ad0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:59:15,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:59:15,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-02 14:59:15,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:59:15,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store-tmp 2023-06-02 14:59:15,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:15,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 14:59:15,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:15,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:15,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 14:59:15,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:15,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 14:59:15,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:59:15,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/WALs/jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34025%2C1685717955301, suffix=, logDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/WALs/jenkins-hbase4.apache.org,34025,1685717955301, archiveDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/oldWALs, maxLogs=10 2023-06-02 14:59:15,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/WALs/jenkins-hbase4.apache.org,34025,1685717955301/jenkins-hbase4.apache.org%2C34025%2C1685717955301.1685717955439 2023-06-02 14:59:15,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK], DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK]] 2023-06-02 14:59:15,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:59:15,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:15,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,449 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,451 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-02 14:59:15,451 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-02 14:59:15,452 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 14:59:15,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:59:15,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=791085, jitterRate=0.00591684877872467}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:59:15,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 14:59:15,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-02 14:59:15,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-02 14:59:15,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-06-02 14:59:15,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-02 14:59:15,464 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-02 14:59:15,464 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-02 14:59:15,464 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-02 14:59:15,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-02 14:59:15,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-02 14:59:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-02 14:59:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-02 14:59:15,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-02 14:59:15,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-02 14:59:15,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-02 14:59:15,481 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-02 14:59:15,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-02 14:59:15,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-02 14:59:15,484 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:59:15,484 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,484 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 14:59:15,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34025,1685717955301, sessionid=0x1008c0d35d90000, setting cluster-up flag (Was=false) 2023-06-02 14:59:15,488 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-02 14:59:15,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,500 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 
14:59:15,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-02 14:59:15,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:15,506 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.hbase-snapshot/.tmp 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:59:15,510 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685717985514 2023-06-02 14:59:15,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-02 14:59:15,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-02 14:59:15,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-02 14:59:15,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-02 14:59:15,515 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-02 14:59:15,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-02 14:59:15,516 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,516 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:59:15,516 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-02 14:59:15,516 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-02 14:59:15,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-02 14:59:15,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-02 14:59:15,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-02 14:59:15,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-02 14:59:15,517 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717955517,5,FailOnTimeoutGroup] 2023-06-02 14:59:15,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717955517,5,FailOnTimeoutGroup] 2023-06-02 14:59:15,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-02 14:59:15,518 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:59:15,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-02 14:59:15,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,530 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:59:15,531 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 14:59:15,531 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb 2023-06-02 14:59:15,542 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:15,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:59:15,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/info 2023-06-02 14:59:15,545 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:59:15,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:59:15,547 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:59:15,548 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:59:15,548 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,548 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 
2023-06-02 14:59:15,550 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/table 2023-06-02 14:59:15,550 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:59:15,550 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,551 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740 2023-06-02 14:59:15,552 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740 2023-06-02 14:59:15,554 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-02 14:59:15,555 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:59:15,557 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=755391, jitterRate=-0.039471715688705444}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 14:59:15,557 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 14:59:15,557 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 14:59:15,558 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 14:59:15,558 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 14:59:15,559 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 14:59:15,559 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-02 14:59:15,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 14:59:15,561 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 14:59:15,562 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-02 14:59:15,578 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(951): ClusterId : f4c7cadd-43da-4a68-9895-65a165f5dc9b 2023-06-02 14:59:15,578 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 14:59:15,582 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 14:59:15,582 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 14:59:15,583 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 14:59:15,584 DEBUG [RS:0;jenkins-hbase4:36801] zookeeper.ReadOnlyZKClient(139): Connect 
0x19ebacea to 127.0.0.1:51040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:59:15,588 DEBUG [RS:0;jenkins-hbase4:36801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@172c41a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:59:15,588 DEBUG [RS:0;jenkins-hbase4:36801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1313c79a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 14:59:15,596 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36801 2023-06-02 14:59:15,596 INFO [RS:0;jenkins-hbase4:36801] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 14:59:15,596 INFO [RS:0;jenkins-hbase4:36801] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 14:59:15,596 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1022): About to register with Master. 2023-06-02 14:59:15,597 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,34025,1685717955301 with isa=jenkins-hbase4.apache.org/172.31.14.131:36801, startcode=1685717955362 2023-06-02 14:59:15,597 DEBUG [RS:0;jenkins-hbase4:36801] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 14:59:15,600 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41085, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 14:59:15,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,602 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb 2023-06-02 14:59:15,602 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39605 2023-06-02 14:59:15,602 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 14:59:15,603 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 14:59:15,604 DEBUG [RS:0;jenkins-hbase4:36801] zookeeper.ZKUtil(162): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,604 WARN [RS:0;jenkins-hbase4:36801] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 14:59:15,604 INFO [RS:0;jenkins-hbase4:36801] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:59:15,604 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1946): logDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,604 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36801,1685717955362] 2023-06-02 14:59:15,609 DEBUG [RS:0;jenkins-hbase4:36801] zookeeper.ZKUtil(162): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,609 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 14:59:15,609 INFO [RS:0;jenkins-hbase4:36801] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 14:59:15,611 INFO [RS:0;jenkins-hbase4:36801] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 14:59:15,612 INFO [RS:0;jenkins-hbase4:36801] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 14:59:15,612 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,612 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 14:59:15,614 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 14:59:15,614 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,615 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,615 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,615 DEBUG [RS:0;jenkins-hbase4:36801] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 14:59:15,615 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,615 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,616 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,628 INFO [RS:0;jenkins-hbase4:36801] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 14:59:15,628 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36801,1685717955362-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 14:59:15,639 INFO [RS:0;jenkins-hbase4:36801] regionserver.Replication(203): jenkins-hbase4.apache.org,36801,1685717955362 started 2023-06-02 14:59:15,639 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36801,1685717955362, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36801, sessionid=0x1008c0d35d90001 2023-06-02 14:59:15,639 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 14:59:15,639 DEBUG [RS:0;jenkins-hbase4:36801] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,639 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36801,1685717955362' 2023-06-02 14:59:15,639 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:15,639 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36801,1685717955362' 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 14:59:15,640 DEBUG [RS:0;jenkins-hbase4:36801] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 14:59:15,640 INFO [RS:0;jenkins-hbase4:36801] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 14:59:15,640 INFO [RS:0;jenkins-hbase4:36801] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-02 14:59:15,712 DEBUG [jenkins-hbase4:34025] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 14:59:15,713 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36801,1685717955362, state=OPENING 2023-06-02 14:59:15,716 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 14:59:15,717 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:15,717 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:59:15,717 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36801,1685717955362}] 2023-06-02 14:59:15,742 INFO [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36801%2C1685717955362, suffix=, logDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362, archiveDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs, maxLogs=32 2023-06-02 14:59:15,751 INFO [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717955743 2023-06-02 14:59:15,752 DEBUG [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK], DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK]] 2023-06-02 14:59:15,764 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 14:59:15,872 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:15,872 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 14:59:15,874 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36178, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 14:59:15,877 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 14:59:15,877 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 14:59:15,879 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36801%2C1685717955362.meta, suffix=.meta, logDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362, 
archiveDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs, maxLogs=32 2023-06-02 14:59:15,887 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.meta.1685717955879.meta 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK], DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK]] 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 14:59:15,887 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:15,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 14:59:15,888 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 14:59:15,889 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 14:59:15,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/info 2023-06-02 14:59:15,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/info 2023-06-02 14:59:15,890 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 14:59:15,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 14:59:15,891 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:59:15,892 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/rep_barrier 2023-06-02 14:59:15,892 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 14:59:15,892 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,892 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 14:59:15,893 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/table 2023-06-02 14:59:15,893 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/table 2023-06-02 14:59:15,893 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 14:59:15,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:15,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740 2023-06-02 14:59:15,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740 2023-06-02 14:59:15,898 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 14:59:15,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 14:59:15,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855751, jitterRate=0.08814394474029541}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 14:59:15,900 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 14:59:15,903 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685717955871 2023-06-02 14:59:15,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 14:59:15,907 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 14:59:15,908 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36801,1685717955362, state=OPEN 2023-06-02 14:59:15,910 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 14:59:15,910 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 14:59:15,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 14:59:15,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36801,1685717955362 in 193 msec 2023-06-02 14:59:15,914 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 14:59:15,914 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 353 msec 2023-06-02 14:59:15,916 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 408 msec 2023-06-02 14:59:15,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685717955916, completionTime=-1 2023-06-02 14:59:15,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 14:59:15,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 14:59:15,919 DEBUG [hconnection-0x7d82bedc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:59:15,923 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36190, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:59:15,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 14:59:15,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685718015924 2023-06-02 14:59:15,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685718075924 2023-06-02 14:59:15,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34025,1685717955301-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34025,1685717955301-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34025,1685717955301-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34025, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-02 14:59:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 14:59:15,932 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 14:59:15,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 14:59:15,933 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:59:15,934 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:59:15,938 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:15,939 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a empty. 2023-06-02 14:59:15,939 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:15,939 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 14:59:15,951 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 14:59:15,953 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cdd132f177c2617aff0760092dc0798a, NAME => 'hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp 2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cdd132f177c2617aff0760092dc0798a, disabling compactions & flushes 2023-06-02 14:59:15,962 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 
2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. after waiting 0 ms 2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:15,962 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:15,962 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cdd132f177c2617aff0760092dc0798a: 2023-06-02 14:59:15,964 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:59:15,965 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717955965"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717955965"}]},"ts":"1685717955965"} 2023-06-02 14:59:15,967 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:59:15,968 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:59:15,969 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717955968"}]},"ts":"1685717955968"} 2023-06-02 14:59:15,970 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 14:59:15,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cdd132f177c2617aff0760092dc0798a, ASSIGN}] 2023-06-02 14:59:15,979 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cdd132f177c2617aff0760092dc0798a, ASSIGN 2023-06-02 14:59:15,979 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cdd132f177c2617aff0760092dc0798a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36801,1685717955362; forceNewPlan=false, retain=false 2023-06-02 14:59:16,131 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cdd132f177c2617aff0760092dc0798a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:16,131 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717956131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717956131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717956131"}]},"ts":"1685717956131"} 2023-06-02 14:59:16,133 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure cdd132f177c2617aff0760092dc0798a, server=jenkins-hbase4.apache.org,36801,1685717955362}] 2023-06-02 14:59:16,289 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:16,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdd132f177c2617aff0760092dc0798a, NAME => 'hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:59:16,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:16,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,290 INFO [StoreOpener-cdd132f177c2617aff0760092dc0798a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,292 DEBUG [StoreOpener-cdd132f177c2617aff0760092dc0798a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/info 2023-06-02 14:59:16,292 DEBUG [StoreOpener-cdd132f177c2617aff0760092dc0798a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/info 2023-06-02 14:59:16,292 INFO [StoreOpener-cdd132f177c2617aff0760092dc0798a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdd132f177c2617aff0760092dc0798a columnFamilyName info 2023-06-02 14:59:16,293 INFO [StoreOpener-cdd132f177c2617aff0760092dc0798a-1] regionserver.HStore(310): Store=cdd132f177c2617aff0760092dc0798a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:16,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdd132f177c2617aff0760092dc0798a 2023-06-02 14:59:16,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:59:16,298 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdd132f177c2617aff0760092dc0798a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=742140, jitterRate=-0.05632062256336212}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:59:16,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdd132f177c2617aff0760092dc0798a: 2023-06-02 14:59:16,300 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a., pid=6, masterSystemTime=1685717956285 2023-06-02 14:59:16,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:16,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 
2023-06-02 14:59:16,303 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cdd132f177c2617aff0760092dc0798a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:16,303 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685717956303"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717956303"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717956303"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717956303"}]},"ts":"1685717956303"} 2023-06-02 14:59:16,307 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 14:59:16,307 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure cdd132f177c2617aff0760092dc0798a, server=jenkins-hbase4.apache.org,36801,1685717955362 in 172 msec 2023-06-02 14:59:16,310 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 14:59:16,310 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cdd132f177c2617aff0760092dc0798a, ASSIGN in 331 msec 2023-06-02 14:59:16,311 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:59:16,311 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717956311"}]},"ts":"1685717956311"} 2023-06-02 14:59:16,313 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 14:59:16,315 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:59:16,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 384 msec 2023-06-02 14:59:16,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 14:59:16,334 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:59:16,334 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:16,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 14:59:16,347 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): 
master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:59:16,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-06-02 14:59:16,360 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 14:59:16,367 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 14:59:16,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-06-02 14:59:16,384 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 14:59:16,386 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 14:59:16,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.001sec 2023-06-02 14:59:16,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 14:59:16,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 14:59:16,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 14:59:16,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34025,1685717955301-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 14:59:16,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34025,1685717955301-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-02 14:59:16,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 14:59:16,478 DEBUG [Listener at localhost/44673] zookeeper.ReadOnlyZKClient(139): Connect 0x531345f9 to 127.0.0.1:51040 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 14:59:16,483 DEBUG [Listener at localhost/44673] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c50d16c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 14:59:16,485 DEBUG [hconnection-0x60adecd7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 14:59:16,487 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 14:59:16,488 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 14:59:16,488 INFO [Listener at localhost/44673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 14:59:16,491 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 14:59:16,491 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 14:59:16,492 INFO [Listener at localhost/44673] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 14:59:16,494 DEBUG [Listener at localhost/44673] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-02 14:59:16,496 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55120, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-02 14:59:16,497 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-02 14:59:16,497 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-02 14:59:16,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 14:59:16,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:16,501 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 14:59:16,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-02 14:59:16,502 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 14:59:16,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 14:59:16,503 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,504 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e empty. 
2023-06-02 14:59:16,504 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,504 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-02 14:59:16,518 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-02 14:59:16,519 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 19be1aa46c375c439d92ad46e552705e, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/.tmp 2023-06-02 14:59:16,526 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:16,526 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 19be1aa46c375c439d92ad46e552705e, disabling compactions & flushes 2023-06-02 14:59:16,527 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:16,527 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:16,527 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. after waiting 0 ms 2023-06-02 14:59:16,527 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:16,527 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 
2023-06-02 14:59:16,527 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 14:59:16,529 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 14:59:16,530 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685717956530"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685717956530"}]},"ts":"1685717956530"} 2023-06-02 14:59:16,532 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 14:59:16,533 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 14:59:16,533 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717956533"}]},"ts":"1685717956533"} 2023-06-02 14:59:16,534 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-02 14:59:16,540 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=19be1aa46c375c439d92ad46e552705e, ASSIGN}] 2023-06-02 14:59:16,541 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=19be1aa46c375c439d92ad46e552705e, ASSIGN 2023-06-02 14:59:16,542 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=19be1aa46c375c439d92ad46e552705e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36801,1685717955362; forceNewPlan=false, retain=false 2023-06-02 14:59:16,693 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=19be1aa46c375c439d92ad46e552705e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:16,694 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685717956693"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685717956693"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685717956693"}]},"ts":"1685717956693"} 2023-06-02 14:59:16,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 19be1aa46c375c439d92ad46e552705e, server=jenkins-hbase4.apache.org,36801,1685717955362}] 2023-06-02 14:59:16,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19be1aa46c375c439d92ad46e552705e, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.', STARTKEY => '', ENDKEY => ''} 2023-06-02 14:59:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 14:59:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,854 INFO [StoreOpener-19be1aa46c375c439d92ad46e552705e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,855 DEBUG [StoreOpener-19be1aa46c375c439d92ad46e552705e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info 2023-06-02 14:59:16,855 DEBUG [StoreOpener-19be1aa46c375c439d92ad46e552705e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info 2023-06-02 14:59:16,855 INFO [StoreOpener-19be1aa46c375c439d92ad46e552705e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19be1aa46c375c439d92ad46e552705e columnFamilyName info 2023-06-02 14:59:16,856 INFO [StoreOpener-19be1aa46c375c439d92ad46e552705e-1] regionserver.HStore(310): Store=19be1aa46c375c439d92ad46e552705e/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 14:59:16,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 19be1aa46c375c439d92ad46e552705e 2023-06-02 14:59:16,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 14:59:16,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 19be1aa46c375c439d92ad46e552705e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=834441, jitterRate=0.06104755401611328}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 14:59:16,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 14:59:16,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e., pid=11, masterSystemTime=1685717956848 2023-06-02 14:59:16,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:16,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 
2023-06-02 14:59:16,865 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=19be1aa46c375c439d92ad46e552705e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:16,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685717956865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685717956865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685717956865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685717956865"}]},"ts":"1685717956865"} 2023-06-02 14:59:16,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-02 14:59:16,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 19be1aa46c375c439d92ad46e552705e, server=jenkins-hbase4.apache.org,36801,1685717955362 in 171 msec 2023-06-02 14:59:16,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-02 14:59:16,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=19be1aa46c375c439d92ad46e552705e, ASSIGN in 329 msec 2023-06-02 14:59:16,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 14:59:16,872 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685717956872"}]},"ts":"1685717956872"} 2023-06-02 14:59:16,874 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-02 14:59:16,877 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 14:59:16,878 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 379 msec 2023-06-02 14:59:19,341 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 14:59:21,610 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-02 14:59:21,610 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 14:59:21,611 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:26,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1227): Checking to see if 
procedure is done pid=9 2023-06-02 14:59:26,503 INFO [Listener at localhost/44673] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-02 14:59:26,506 DEBUG [Listener at localhost/44673] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:26,506 DEBUG [Listener at localhost/44673] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:26,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-02 14:59:26,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-02 14:59:26,526 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-02 14:59:26,526 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:26,527 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-02 14:59:26,527 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-06-02 14:59:26,527 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,527 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-02 14:59:26,529 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:26,529 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,529 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:26,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:26,529 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,529 DEBUG 
[(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-02 14:59:26,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-06-02 14:59:26,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,530 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-02 14:59:26,530 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-02 14:59:26,531 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-02 14:59:26,532 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-02 14:59:26,532 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-02 14:59:26,533 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:26,533 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-02 14:59:26,534 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-02 14:59:26,534 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-02 14:59:26,534 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:26,534 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. started... 
2023-06-02 14:59:26,535 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing cdd132f177c2617aff0760092dc0798a 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 14:59:26,547 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/.tmp/info/f2412b5fcea34423a9a3d1a61f881063 2023-06-02 14:59:26,556 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/.tmp/info/f2412b5fcea34423a9a3d1a61f881063 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/info/f2412b5fcea34423a9a3d1a61f881063 2023-06-02 14:59:26,561 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/info/f2412b5fcea34423a9a3d1a61f881063, entries=2, sequenceid=6, filesize=4.8 K 2023-06-02 14:59:26,562 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for cdd132f177c2617aff0760092dc0798a in 27ms, sequenceid=6, compaction requested=false 2023-06-02 14:59:26,562 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for cdd132f177c2617aff0760092dc0798a: 2023-06-02 14:59:26,562 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 14:59:26,563 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-02 14:59:26,563 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-02 14:59:26,563 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,563 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-02 14:59:26,563 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-02 14:59:26,565 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,565 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:26,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:26,565 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,565 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-02 14:59:26,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:26,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:26,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,566 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,567 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:26,567 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-02 14:59:26,567 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-02 14:59:26,567 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@7158269c[Count = 0] remaining members to acquire global barrier 2023-06-02 14:59:26,567 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,570 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,570 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,570 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,570 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-06-02 14:59:26,570 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,570 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-02 14:59:26,570 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-02 14:59:26,570 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,36801,1685717955362' in zk 2023-06-02 14:59:26,572 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,572 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-02 14:59:26,572 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,572 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:26,572 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:26,572 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
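
The entries above show the member side of the acquire/reached handshake: the region server sets a watcher on the reached znode before it exists ("Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace") and unblocks when the coordinator creates it. Below is a minimal sketch of that watch-then-wait pattern using the plain ZooKeeper client, not HBase's actual ZKProcedureMemberRpcs code; the quorum address and latch handling are illustrative assumptions.

// Minimal sketch (assumed standalone example, not HBase source): wait for a
// "reached" barrier znode that a coordinator will create later.
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ReachedBarrierWaitSketch {
    public static void main(String[] args) throws Exception {
        String reachedZnode = "/hbase/flush-table-proc/reached/hbase:namespace";
        CountDownLatch reached = new CountDownLatch(1);

        ZooKeeper zk = new ZooKeeper("127.0.0.1:51040", 30_000, event -> { });

        // exists() registers a watch even when the znode is absent, which is the
        // "Set watcher on znode that does not yet exist" behavior in the log.
        Watcher onCreated = (WatchedEvent event) -> {
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                reached.countDown();
            }
        };
        if (zk.exists(reachedZnode, onCreated) != null) {
            reached.countDown();  // coordinator already created the barrier
        }

        reached.await();          // unblocks on the NodeCreated event
        System.out.println("global barrier reached, running in-barrier work");
        zk.close();
    }
}
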
2023-06-02 14:59:26,572 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-06-02 14:59:26,573 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:26,573 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:26,573 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,573 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,574 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:26,574 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,574 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,574 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,36801,1685717955362': 2023-06-02 14:59:26,575 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-06-02 14:59:26,575 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-06-02 14:59:26,575 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-02 14:59:26,575 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-02 14:59:26,575 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-06-02 14:59:26,575 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-02 14:59:26,577 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,577 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,577 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,577 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:26,577 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:26,577 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,577 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,578 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,578 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:26,578 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,578 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,578 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,579 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:26,579 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-02 14:59:26,579 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,584 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-02 
14:59:26,585 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:26,585 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-02 14:59:26,585 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-02 14:59:26,585 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-02 14:59:26,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-02 14:59:26,586 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:26,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-02 14:59:26,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:26,588 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-06-02 14:59:26,588 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
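
The "Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'" and "Sleeping: 10000ms" entries come from the client-side HBaseAdmin loop that polls the master until the procedure instance is done. Below is a minimal client-side sketch of driving and checking such a procedure through the public Admin API; the signature and instance name are taken from the log, while the configuration and lack of error handling are simplifying assumptions.

// Minimal sketch of running the flush-table-proc procedure from a client.
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            Map<String, String> props = new HashMap<>();
            // execProcedure() blocks: internally HBaseAdmin polls the master and
            // sleeps between retries, which is where the "Waiting a max of 300000 ms"
            // and "Sleeping: 10000ms" lines above originate.
            admin.execProcedure("flush-table-proc", "hbase:namespace", props);
            // The same completion check the master answers in MasterRpcServices:
            boolean done = admin.isProcedureFinished("flush-table-proc", "hbase:namespace", props);
            System.out.println("flush-table-proc done: " + done);
        }
    }
}
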
2023-06-02 14:59:36,588 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-02 14:59:36,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-02 14:59:36,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-02 14:59:36,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,605 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:36,605 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:36,605 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-02 14:59:36,606 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-02 14:59:36,606 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,606 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,607 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:36,607 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,607 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:36,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:36,608 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,608 DEBUG 
[(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,608 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,609 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-02 14:59:36,609 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:36,609 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-02 14:59:36,609 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-02 14:59:36,609 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-02 14:59:36,609 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:36,609 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. started... 
2023-06-02 14:59:36,610 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 19be1aa46c375c439d92ad46e552705e 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-02 14:59:36,621 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/2dd8c108f75440d7a3e833c60adeefc3 2023-06-02 14:59:36,629 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/2dd8c108f75440d7a3e833c60adeefc3 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3 2023-06-02 14:59:36,635 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3, entries=1, sequenceid=5, filesize=5.8 K 2023-06-02 14:59:36,636 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 19be1aa46c375c439d92ad46e552705e in 26ms, sequenceid=5, compaction requested=false 2023-06-02 14:59:36,637 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 14:59:36,637 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:36,637 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-02 14:59:36,637 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
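
Each flush cycle above follows the same memstore lifecycle: write the snapshot to a temporary file under .tmp/, commit it into the info/ column family directory, and add it to the store at the next sequence id (sequenceid=5 here, sequenceid=9 on the following flush). Below is a minimal client-side sketch of the write-then-flush pattern that produces one new HFile per flush; the row keys and values are illustrative, while the table name and column family match the log.

// Minimal sketch: each Admin.flush() turns the current memstore into one new HFile.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FlushPerWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(name);
             Admin admin = conn.getAdmin()) {
            for (int i = 0; i < 2; i++) {
                Put put = new Put(Bytes.toBytes("row-" + i));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
                table.put(put);
                // Each flush writes a separate HFile at the next sequence id
                // (sequenceid=5, then 9, ...); compaction merges them later.
                admin.flush(name);
            }
        }
    }
}
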
2023-06-02 14:59:36,637 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,637 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-02 14:59:36,637 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-02 14:59:36,640 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,640 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:36,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:36,641 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,641 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-02 14:59:36,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:36,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:36,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,642 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,642 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:36,642 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-02 14:59:36,642 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@7567bc6f[Count = 0] remaining members to acquire 
global barrier 2023-06-02 14:59:36,642 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-02 14:59:36,642 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,643 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,643 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,644 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-02 14:59:36,644 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,644 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-02 14:59:36,644 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,36801,1685717955362' in zk 2023-06-02 14:59:36,644 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-02 14:59:36,645 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-02 14:59:36,645 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,645 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-02 14:59:36,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,645 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-02 14:59:36,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:36,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:36,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:36,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:36,647 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,647 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,647 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:36,647 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,648 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,648 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,36801,1685717955362': 2023-06-02 14:59:36,648 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-02 14:59:36,648 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-02 14:59:36,648 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-02 14:59:36,648 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-02 14:59:36,648 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,648 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-02 14:59:36,654 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,654 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,654 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,654 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,654 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:36,654 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:36,654 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:36,654 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,655 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,655 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:36,655 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:36,655 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:36,655 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,655 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,656 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:36,656 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,656 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,656 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,656 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:36,657 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,657 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,660 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:36,660 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
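
Once the coordinator logs "Clearing all znodes for procedure ...", the per-procedure children under acquired/, reached/ and abort/ are removed, which is what the burst of NodeDeleted and NodeChildrenChanged events above reflects. Below is a minimal sketch of that cleanup using the plain ZooKeeper client rather than HBase's ZKProcedureUtil/ZKUtil helpers; the quorum address and the existence checks are illustrative assumptions.

// Minimal sketch: remove the per-procedure znodes under each barrier phase.
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ProcZnodeCleanupSketch {
    static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        List<String> children = zk.getChildren(path, false);
        for (String child : children) {
            deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1);  // -1 matches any znode version
    }

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:51040", 30_000, event -> { });
        String proc = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
        for (String phase : new String[] {"acquired", "reached", "abort"}) {
            String path = "/hbase/flush-table-proc/" + phase + "/" + proc;
            if (zk.exists(path, false) != null) {
                deleteRecursively(zk, path);
            }
        }
        zk.close();
    }
}
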
2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:36,660 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:36,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:36,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,660 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:36,661 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:36,661 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:36,661 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-02 14:59:36,661 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-02 14:59:46,661 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-02 14:59:46,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-02 14:59:46,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-02 14:59:46,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-02 14:59:46,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,671 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:46,672 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:46,672 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-02 14:59:46,672 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-02 14:59:46,672 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,672 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,674 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:46,675 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,675 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:46,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:46,675 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,675 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-02 14:59:46,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-02 14:59:46,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,676 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,676 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-02 14:59:46,676 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,676 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-02 14:59:46,676 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:46,676 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-02 14:59:46,676 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-02 14:59:46,677 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-02 14:59:46,677 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:46,677 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. started... 
2023-06-02 14:59:46,677 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 19be1aa46c375c439d92ad46e552705e 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-02 14:59:46,686 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/b26fb1d735be4ff59cb24b1531d7a017 2023-06-02 14:59:46,695 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/b26fb1d735be4ff59cb24b1531d7a017 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017 2023-06-02 14:59:46,701 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017, entries=1, sequenceid=9, filesize=5.8 K 2023-06-02 14:59:46,701 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 19be1aa46c375c439d92ad46e552705e in 24ms, sequenceid=9, compaction requested=false 2023-06-02 14:59:46,702 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 14:59:46,702 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:46,702 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-02 14:59:46,702 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-02 14:59:46,702 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,702 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-02 14:59:46,702 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-02 14:59:46,704 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,704 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:46,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:46,704 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,704 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-02 14:59:46,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:46,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:46,705 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,705 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,705 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:46,706 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-02 14:59:46,706 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@30652ca7[Count = 0] remaining members to acquire 
global barrier 2023-06-02 14:59:46,706 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-02 14:59:46,706 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,707 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,707 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,708 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,708 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-02 14:59:46,708 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-02 14:59:46,708 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,708 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-02 14:59:46,708 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,36801,1685717955362' in zk 2023-06-02 14:59:46,709 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-02 14:59:46,710 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,710 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-02 14:59:46,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:46,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:46,710 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-02 14:59:46,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:46,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:46,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:46,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,36801,1685717955362': 2023-06-02 14:59:46,712 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-02 14:59:46,712 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-02 14:59:46,712 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
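The znode dump in the records above (acquired / reached / abort under /hbase/flush-table-proc, each with a child per member) is the coordinator's view of the two-phase barrier. The following is a minimal, illustrative sketch of walking that same tree with the plain ZooKeeper client; the quorum address 127.0.0.1:51040 and the base path are copied from the log, while the class name ZkTreeDump and the dump itself are not part of the test.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical helper: prints the same acquired/reached/abort layout that
// ZKProcedureUtil logs above, one indented line per znode.
public class ZkTreeDump {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from the log; no watcher is needed for a one-off dump.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51040", 30000, event -> { });
    try {
      dump(zk, "/hbase/flush-table-proc", 0);
    } finally {
      zk.close();
    }
  }

  static void dump(ZooKeeper zk, String path, int depth) throws Exception {
    StringBuilder indent = new StringBuilder("|-");
    for (int i = 0; i < depth; i++) {
      indent.append("---");
    }
    System.out.println(indent + path.substring(path.lastIndexOf('/') + 1));
    List<String> children = zk.getChildren(path, false);
    for (String child : children) {
      dump(zk, path + "/" + child, depth + 1);
    }
  }
}

Run while a flush-table-proc is in flight and the output mirrors the |-acquired / |-reached / |-abort tree printed by the coordinator; run afterwards and the children are already cleared, as the cleanup records further down show.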
2023-06-02 14:59:46,712 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-02 14:59:46,712 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,712 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-02 14:59:46,714 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,714 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,714 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:46,714 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:46,714 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:46,714 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:46,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,715 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:46,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,716 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:46,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,719 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:46,719 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:46,719 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:46,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:46,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-02 14:59:46,719 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:46,720 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,720 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,720 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-02 14:59:46,720 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:46,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:46,720 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-02 14:59:46,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:46,720 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-02 14:59:56,720 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2704): Getting current status of procedure from master... 
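The client-side wait loop above (HBaseAdmin sleeping 10000 ms between status checks, capped at 300000 ms) is the blocking half of Admin.execProcedure for the flush-table-proc signature. A hedged sketch of driving the same coordinated flush through the public Admin API follows; the table name is the test table from the log, the connection configuration is assumed to point at this mini-cluster, and the exact blocking behaviour is inferred from the log rather than guaranteed by the interface.

import java.util.HashMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumed to resolve to the mini-cluster
    String table = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Submit the coordinated flush; the client then polls the master until the
      // procedure is reported done (the 10 s sleep / 300 s cap seen in the log).
      admin.execProcedure("flush-table-proc", table, new HashMap<>());
      // A non-blocking status check is also available:
      boolean done = admin.isProcedureFinished("flush-table-proc", table, new HashMap<>());
      System.out.println("flush-table-proc done: " + done);
    }
  }
}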
2023-06-02 14:59:56,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-02 14:59:56,734 INFO [Listener at localhost/44673] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717955743 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717996723 2023-06-02 14:59:56,734 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK], DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK]] 2023-06-02 14:59:56,734 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(716): hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717955743 is not closed yet, will try archiving it next time 2023-06-02 14:59:56,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-02 14:59:56,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-02 14:59:56,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,742 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:56,742 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:56,742 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-02 14:59:56,742 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
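The WAL roll recorded in this block (old file "is not closed yet, will try archiving it next time", new writer opened on the same two-datanode pipeline) can also be requested from a client with Admin.rollWALWriter. A minimal sketch follows; the server name is copied verbatim from the log, everything else (class name, configuration source) is an assumption for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumed to resolve to the mini-cluster
    // Region server identity exactly as it appears in the log: host,port,startcode.
    ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,36801,1685717955362");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask the region server to close its current WAL file and open a new one;
      // the old file is moved to oldWALs once none of its edits are still needed.
      admin.rollWALWriter(rs);
    }
  }
}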
2023-06-02 14:59:56,743 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,743 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,744 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,744 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:56,744 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:56,744 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:56,744 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,744 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-02 14:59:56,744 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,745 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-02 14:59:56,745 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,745 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,745 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-02 14:59:56,745 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,745 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-02 14:59:56,746 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 14:59:56,746 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-02 14:59:56,746 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-02 14:59:56,746 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-02 14:59:56,746 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:56,747 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. started... 2023-06-02 14:59:56,747 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 19be1aa46c375c439d92ad46e552705e 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-02 14:59:56,761 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/84e2c88a3e374b67a2f2849f59d5c99b 2023-06-02 14:59:56,768 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/84e2c88a3e374b67a2f2849f59d5c99b as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b 2023-06-02 14:59:56,774 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b, entries=1, sequenceid=13, filesize=5.8 K 2023-06-02 14:59:56,776 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 19be1aa46c375c439d92ad46e552705e in 29ms, sequenceid=13, compaction requested=true 2023-06-02 14:59:56,776 DEBUG 
[rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 14:59:56,776 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 14:59:56,776 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-02 14:59:56,776 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-02 14:59:56,776 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,776 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-02 14:59:56,776 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-02 14:59:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,778 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:56,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-02 14:59:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:56,779 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:56,780 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-02 14:59:56,780 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@31c3329b[Count = 0] remaining members to acquire global barrier 2023-06-02 14:59:56,780 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-02 14:59:56,780 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,781 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,781 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,781 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-02 14:59:56,781 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-02 14:59:56,781 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,36801,1685717955362' in zk 2023-06-02 14:59:56,781 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,781 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-02 14:59:56,784 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-02 14:59:56,784 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,784 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-02 14:59:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:56,784 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-02 14:59:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,36801,1685717955362': 2023-06-02 14:59:56,787 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-02 14:59:56,787 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-02 14:59:56,787 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-02 14:59:56,787 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-02 14:59:56,787 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,787 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-02 14:59:56,789 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,789 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,789 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created 
event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 14:59:56,789 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 14:59:56,789 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:56,789 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 14:59:56,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,790 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 14:59:56,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,791 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 14:59:56,791 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,794 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,794 DEBUG [Listener at 
localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 14:59:56,795 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,795 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 14:59:56,795 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-02 14:59:56,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 14:59:56,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-02 14:59:56,795 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,795 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 14:59:56,795 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 14:59:56,795 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 14:59:56,795 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-02 14:59:56,796 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-02 14:59:56,796 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,796 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,796 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 14:59:56,796 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 14:59:56,796 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 15:00:06,796 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-02 15:00:06,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-02 15:00:06,797 DEBUG [Listener at localhost/44673] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:00:06,802 DEBUG [Listener at localhost/44673] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:00:06,802 DEBUG [Listener at localhost/44673] regionserver.HStore(1912): 19be1aa46c375c439d92ad46e552705e/info is initiating minor compaction (all files) 2023-06-02 15:00:06,802 INFO [Listener at localhost/44673] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 15:00:06,802 INFO [Listener at localhost/44673] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:06,803 INFO [Listener at localhost/44673] regionserver.HRegion(2259): Starting compaction of 19be1aa46c375c439d92ad46e552705e/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 
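The selection above (3 store files eligible, all 3 chosen by ExploringCompactionPolicy, so a minor compaction that happens to cover every file) is driven directly on the region by the test thread. From a client, the equivalent request and a completion check look roughly like the sketch below, assuming the standard 2.x Admin API; the polling loop is an illustration, not what the test does.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactionState;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CompactTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumed to resolve to the mini-cluster
    TableName table =
        TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.compact(table);  // queue a (minor) compaction for every region of the table
      // Compactions run asynchronously; poll until the server reports none pending.
      while (admin.getCompactionState(table) != CompactionState.NONE) {
        Thread.sleep(500);
      }
    }
  }
}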
2023-06-02 15:00:06,803 INFO [Listener at localhost/44673] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b] into tmpdir=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp, totalSize=17.4 K 2023-06-02 15:00:06,803 DEBUG [Listener at localhost/44673] compactions.Compactor(207): Compacting 2dd8c108f75440d7a3e833c60adeefc3, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685717976598 2023-06-02 15:00:06,803 DEBUG [Listener at localhost/44673] compactions.Compactor(207): Compacting b26fb1d735be4ff59cb24b1531d7a017, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685717986663 2023-06-02 15:00:06,804 DEBUG [Listener at localhost/44673] compactions.Compactor(207): Compacting 84e2c88a3e374b67a2f2849f59d5c99b, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685717996722 2023-06-02 15:00:06,814 INFO [Listener at localhost/44673] throttle.PressureAwareThroughputController(145): 19be1aa46c375c439d92ad46e552705e#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:00:06,832 DEBUG [Listener at localhost/44673] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/3217ddc34dff46199f1c62a8b1b8ffab as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/3217ddc34dff46199f1c62a8b1b8ffab 2023-06-02 15:00:06,838 INFO [Listener at localhost/44673] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 19be1aa46c375c439d92ad46e552705e/info of 19be1aa46c375c439d92ad46e552705e into 3217ddc34dff46199f1c62a8b1b8ffab(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
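The result above is worth a quick sanity check: three single-row HFiles of 5.8 K each give the 17769-byte (~17.4 K) input the policy reports, yet the compacted output is only 8.0 K, because the fixed per-file overhead (trailer, file info, bloom filter, index blocks) is now paid once instead of three times. A minimal sketch of confirming that on disk with the Hadoop FileSystem API follows; the NameNode address and store directory are copied from the log, and this check is not part of the test itself.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStoreFiles {
  public static void main(String[] args) throws Exception {
    // NameNode and store directory exactly as they appear in the log above.
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:39605"), new Configuration());
    Path storeDir = new Path("/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/"
        + "data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/"
        + "19be1aa46c375c439d92ad46e552705e/info");
    // After the compaction commits, a single HFile (~8.0 K here) should remain;
    // the three compacted-away inputs are moved out of the store directory.
    for (FileStatus f : fs.listStatus(storeDir)) {
      System.out.println(f.getPath().getName() + "  " + f.getLen() + " bytes");
    }
  }
}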
2023-06-02 15:00:06,839 DEBUG [Listener at localhost/44673] regionserver.HRegion(2289): Compaction status journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 15:00:06,848 INFO [Listener at localhost/44673] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717996723 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685718006840 2023-06-02 15:00:06,849 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK], DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK]] 2023-06-02 15:00:06,849 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(716): hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717996723 is not closed yet, will try archiving it next time 2023-06-02 15:00:06,849 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717955743 to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717955743 2023-06-02 15:00:06,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-06-02 15:00:06,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-02 15:00:06,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,855 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 15:00:06,856 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 15:00:06,856 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-02 15:00:06,856 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-02 15:00:06,856 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,857 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,862 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,862 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 15:00:06,862 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 15:00:06,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 15:00:06,862 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,862 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-02 15:00:06,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-02 15:00:06,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,863 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,863 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-02 15:00:06,863 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,863 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-02 15:00:06,863 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-02 15:00:06,864 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-02 15:00:06,864 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-02 15:00:06,864 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-02 15:00:06,864 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 15:00:06,864 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. started... 2023-06-02 15:00:06,864 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 19be1aa46c375c439d92ad46e552705e 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-02 15:00:06,876 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/f0c45e97b8e9479aaaa458210931ed37 2023-06-02 15:00:06,881 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/f0c45e97b8e9479aaaa458210931ed37 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/f0c45e97b8e9479aaaa458210931ed37 2023-06-02 15:00:06,887 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/f0c45e97b8e9479aaaa458210931ed37, entries=1, sequenceid=18, filesize=5.8 K 2023-06-02 15:00:06,887 INFO [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 19be1aa46c375c439d92ad46e552705e in 23ms, sequenceid=18, compaction requested=false 2023-06-02 15:00:06,888 DEBUG 
[rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 15:00:06,888 DEBUG [rs(jenkins-hbase4.apache.org,36801,1685717955362)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 15:00:06,888 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-02 15:00:06,888 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-02 15:00:06,888 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,888 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-02 15:00:06,888 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-02 15:00:06,890 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,890 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,890 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,890 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 15:00:06,890 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 15:00:06,890 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,890 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-02 15:00:06,891 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 15:00:06,891 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 15:00:06,891 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,891 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,892 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 15:00:06,892 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36801,1685717955362' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-02 15:00:06,892 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@19f9ef9[Count = 0] remaining members to acquire global barrier 2023-06-02 15:00:06,892 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-02 15:00:06,892 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,893 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,893 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,893 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,893 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-02 15:00:06,893 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-02 15:00:06,893 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,894 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-02 15:00:06,894 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,36801,1685717955362' in zk 2023-06-02 15:00:06,895 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-02 15:00:06,895 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,895 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-02 15:00:06,895 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,896 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 15:00:06,896 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 15:00:06,895 DEBUG [member: 'jenkins-hbase4.apache.org,36801,1685717955362' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-02 15:00:06,896 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 15:00:06,897 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 15:00:06,897 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,897 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 15:00:06,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,898 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,36801,1685717955362': 2023-06-02 15:00:06,899 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36801,1685717955362' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-02 15:00:06,899 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-02 15:00:06,899 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-02 15:00:06,899 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-02 15:00:06,899 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,899 INFO [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-02 15:00:06,901 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,901 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-02 15:00:06,902 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-02 15:00:06,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,902 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 15:00:06,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-02 15:00:06,902 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 15:00:06,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 15:00:06,902 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-02 15:00:06,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,903 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-02 15:00:06,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,904 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,907 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,907 DEBUG [Listener at 
localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-02 15:00:06,907 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-02 15:00:06,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-02 15:00:06,907 DEBUG [(jenkins-hbase4.apache.org,34025,1685717955301)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-02 15:00:06,907 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-02 15:00:06,907 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-02 15:00:06,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 15:00:06,908 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-02 15:00:06,908 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,908 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-02 15:00:06,908 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:06,909 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,909 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-02 15:00:06,909 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 15:00:06,909 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:06,909 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-02 15:00:16,909 DEBUG [Listener at localhost/44673] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-02 15:00:16,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34025] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-02 15:00:16,919 INFO [Listener at localhost/44673] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685718006840 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685718016912 2023-06-02 15:00:16,920 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40699,DS-cba1ab3d-3acc-4eb2-91e4-ef931524fd81,DISK], DatanodeInfoWithStorage[127.0.0.1:39585,DS-9dad7ae9-015a-4543-98b4-a34247417bb6,DISK]] 2023-06-02 15:00:16,920 DEBUG [Listener at localhost/44673] wal.AbstractFSWAL(716): hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685718006840 is not closed yet, will try archiving it next time 2023-06-02 15:00:16,920 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717996723 to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs/jenkins-hbase4.apache.org%2C36801%2C1685717955362.1685717996723 2023-06-02 15:00:16,920 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 15:00:16,920 INFO [Listener at localhost/44673] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-02 15:00:16,920 DEBUG [Listener at localhost/44673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x531345f9 to 127.0.0.1:51040 2023-06-02 15:00:16,920 DEBUG [Listener at localhost/44673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:00:16,921 DEBUG [Listener at localhost/44673] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 15:00:16,921 DEBUG [Listener at localhost/44673] util.JVMClusterUtil(257): Found active master hash=266487945, stopped=false 2023-06-02 15:00:16,921 INFO [Listener at localhost/44673] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 15:00:16,924 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:00:16,924 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:00:16,924 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:16,924 INFO [Listener at localhost/44673] procedure2.ProcedureExecutor(629): Stopping 
2023-06-02 15:00:16,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:00:16,924 DEBUG [Listener at localhost/44673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x356b32c6 to 127.0.0.1:51040 2023-06-02 15:00:16,925 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:00:16,925 DEBUG [Listener at localhost/44673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:00:16,925 INFO [Listener at localhost/44673] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36801,1685717955362' ***** 2023-06-02 15:00:16,925 INFO [Listener at localhost/44673] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 15:00:16,925 INFO [RS:0;jenkins-hbase4:36801] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 15:00:16,925 INFO [RS:0;jenkins-hbase4:36801] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-02 15:00:16,925 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 15:00:16,925 INFO [RS:0;jenkins-hbase4:36801] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-02 15:00:16,926 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(3303): Received CLOSE for 19be1aa46c375c439d92ad46e552705e 2023-06-02 15:00:16,926 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(3303): Received CLOSE for cdd132f177c2617aff0760092dc0798a 2023-06-02 15:00:16,926 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:16,926 DEBUG [RS:0;jenkins-hbase4:36801] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19ebacea to 127.0.0.1:51040 2023-06-02 15:00:16,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 19be1aa46c375c439d92ad46e552705e, disabling compactions & flushes 2023-06-02 15:00:16,926 DEBUG [RS:0;jenkins-hbase4:36801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:00:16,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 15:00:16,926 INFO [RS:0;jenkins-hbase4:36801] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 15:00:16,926 INFO [RS:0;jenkins-hbase4:36801] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 15:00:16,927 INFO [RS:0;jenkins-hbase4:36801] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-02 15:00:16,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 
2023-06-02 15:00:16,927 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 15:00:16,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. after waiting 0 ms 2023-06-02 15:00:16,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 15:00:16,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 19be1aa46c375c439d92ad46e552705e 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-02 15:00:16,927 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-02 15:00:16,927 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1478): Online Regions={19be1aa46c375c439d92ad46e552705e=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e., 1588230740=hbase:meta,,1.1588230740, cdd132f177c2617aff0760092dc0798a=hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a.} 2023-06-02 15:00:16,927 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 15:00:16,927 DEBUG [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1504): Waiting on 1588230740, 19be1aa46c375c439d92ad46e552705e, cdd132f177c2617aff0760092dc0798a 2023-06-02 15:00:16,927 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 15:00:16,928 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 15:00:16,928 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 15:00:16,928 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 15:00:16,928 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-06-02 15:00:16,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/4c3f57dc69f248dcab8ec8d813c5ad77 2023-06-02 15:00:16,945 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/.tmp/info/0e744276f492476fb9d482775b5a7d5e 2023-06-02 15:00:16,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/.tmp/info/4c3f57dc69f248dcab8ec8d813c5ad77 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/4c3f57dc69f248dcab8ec8d813c5ad77 2023-06-02 15:00:16,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/4c3f57dc69f248dcab8ec8d813c5ad77, entries=1, sequenceid=22, filesize=5.8 K 2023-06-02 15:00:16,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 19be1aa46c375c439d92ad46e552705e in 31ms, sequenceid=22, compaction requested=true 2023-06-02 15:00:16,961 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b] to archive 2023-06-02 15:00:16,962 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-02 15:00:16,965 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3 to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/2dd8c108f75440d7a3e833c60adeefc3 2023-06-02 15:00:16,967 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017 to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/b26fb1d735be4ff59cb24b1531d7a017 2023-06-02 15:00:16,968 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/.tmp/table/4f1c88b7ac52417bb035e7ad6a882e94 2023-06-02 15:00:16,969 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b to hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/info/84e2c88a3e374b67a2f2849f59d5c99b 2023-06-02 15:00:16,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/19be1aa46c375c439d92ad46e552705e/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-06-02 15:00:16,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 19be1aa46c375c439d92ad46e552705e: 2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685717956497.19be1aa46c375c439d92ad46e552705e. 
2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdd132f177c2617aff0760092dc0798a, disabling compactions & flushes 2023-06-02 15:00:16,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. after waiting 0 ms 2023-06-02 15:00:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 15:00:16,986 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/.tmp/info/0e744276f492476fb9d482775b5a7d5e as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/info/0e744276f492476fb9d482775b5a7d5e 2023-06-02 15:00:16,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/namespace/cdd132f177c2617aff0760092dc0798a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-02 15:00:16,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 2023-06-02 15:00:16,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdd132f177c2617aff0760092dc0798a: 2023-06-02 15:00:16,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685717955931.cdd132f177c2617aff0760092dc0798a. 
2023-06-02 15:00:16,992 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/info/0e744276f492476fb9d482775b5a7d5e, entries=20, sequenceid=14, filesize=7.6 K 2023-06-02 15:00:16,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/.tmp/table/4f1c88b7ac52417bb035e7ad6a882e94 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/table/4f1c88b7ac52417bb035e7ad6a882e94 2023-06-02 15:00:16,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/table/4f1c88b7ac52417bb035e7ad6a882e94, entries=4, sequenceid=14, filesize=4.9 K 2023-06-02 15:00:16,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 71ms, sequenceid=14, compaction requested=false 2023-06-02 15:00:17,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-02 15:00:17,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 15:00:17,006 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 15:00:17,006 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 15:00:17,006 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-02 15:00:17,128 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36801,1685717955362; all regions closed. 
2023-06-02 15:00:17,128 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:17,134 DEBUG [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs 2023-06-02 15:00:17,134 INFO [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36801%2C1685717955362.meta:.meta(num 1685717955879) 2023-06-02 15:00:17,135 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/WALs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:17,140 DEBUG [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/oldWALs 2023-06-02 15:00:17,140 INFO [RS:0;jenkins-hbase4:36801] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36801%2C1685717955362:(num 1685718016912) 2023-06-02 15:00:17,140 DEBUG [RS:0;jenkins-hbase4:36801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:00:17,140 INFO [RS:0;jenkins-hbase4:36801] regionserver.LeaseManager(133): Closed leases 2023-06-02 15:00:17,141 INFO [RS:0;jenkins-hbase4:36801] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-02 15:00:17,141 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-02 15:00:17,141 INFO [RS:0;jenkins-hbase4:36801] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36801 2023-06-02 15:00:17,145 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36801,1685717955362 2023-06-02 15:00:17,145 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:00:17,145 ERROR [Listener at localhost/44673-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@4511523f rejected from java.util.concurrent.ThreadPoolExecutor@412ece45[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 34] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-06-02 15:00:17,146 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:00:17,146 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36801,1685717955362] 2023-06-02 15:00:17,146 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36801,1685717955362; numProcessing=1 2023-06-02 15:00:17,149 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36801,1685717955362 already deleted, retry=false 2023-06-02 15:00:17,149 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36801,1685717955362 expired; onlineServers=0 2023-06-02 15:00:17,149 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34025,1685717955301' ***** 2023-06-02 15:00:17,149 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-02 15:00:17,149 DEBUG [M:0;jenkins-hbase4:34025] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bddddad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 15:00:17,149 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 15:00:17,149 INFO 
[M:0;jenkins-hbase4:34025] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34025,1685717955301; all regions closed. 2023-06-02 15:00:17,149 DEBUG [M:0;jenkins-hbase4:34025] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:00:17,149 DEBUG [M:0;jenkins-hbase4:34025] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 15:00:17,149 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-02 15:00:17,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717955517] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685717955517,5,FailOnTimeoutGroup] 2023-06-02 15:00:17,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717955517] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685717955517,5,FailOnTimeoutGroup] 2023-06-02 15:00:17,149 DEBUG [M:0;jenkins-hbase4:34025] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-02 15:00:17,151 INFO [M:0;jenkins-hbase4:34025] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-02 15:00:17,151 INFO [M:0;jenkins-hbase4:34025] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-02 15:00:17,151 INFO [M:0;jenkins-hbase4:34025] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-02 15:00:17,151 DEBUG [M:0;jenkins-hbase4:34025] master.HMaster(1512): Stopping service threads 2023-06-02 15:00:17,151 INFO [M:0;jenkins-hbase4:34025] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-02 15:00:17,151 ERROR [M:0;jenkins-hbase4:34025] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-02 15:00:17,151 INFO [M:0;jenkins-hbase4:34025] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-02 15:00:17,152 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-02 15:00:17,152 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-02 15:00:17,152 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:17,152 DEBUG [M:0;jenkins-hbase4:34025] zookeeper.ZKUtil(398): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-02 15:00:17,152 WARN [M:0;jenkins-hbase4:34025] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-02 15:00:17,152 INFO [M:0;jenkins-hbase4:34025] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-02 15:00:17,152 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:00:17,152 INFO [M:0;jenkins-hbase4:34025] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-02 15:00:17,153 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 15:00:17,153 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:17,153 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:17,153 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 15:00:17,153 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-02 15:00:17,153 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-06-02 15:00:17,164 INFO [M:0;jenkins-hbase4:34025] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/15391802c13f41318cfd6b67fd30a831 2023-06-02 15:00:17,170 INFO [M:0;jenkins-hbase4:34025] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15391802c13f41318cfd6b67fd30a831 2023-06-02 15:00:17,171 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/15391802c13f41318cfd6b67fd30a831 as hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/15391802c13f41318cfd6b67fd30a831 2023-06-02 15:00:17,176 INFO [M:0;jenkins-hbase4:34025] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15391802c13f41318cfd6b67fd30a831 2023-06-02 15:00:17,177 INFO [M:0;jenkins-hbase4:34025] regionserver.HStore(1080): Added hdfs://localhost:39605/user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/15391802c13f41318cfd6b67fd30a831, entries=11, sequenceid=100, filesize=6.1 K 2023-06-02 15:00:17,178 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=100, compaction requested=false 2023-06-02 15:00:17,179 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:17,179 DEBUG [M:0;jenkins-hbase4:34025] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:00:17,179 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0b178b19-2bc4-f8dc-f4db-b8a28daf4dbb/MasterData/WALs/jenkins-hbase4.apache.org,34025,1685717955301 2023-06-02 15:00:17,182 INFO [M:0;jenkins-hbase4:34025] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-02 15:00:17,182 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-02 15:00:17,183 INFO [M:0;jenkins-hbase4:34025] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34025 2023-06-02 15:00:17,185 DEBUG [M:0;jenkins-hbase4:34025] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34025,1685717955301 already deleted, retry=false 2023-06-02 15:00:17,247 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:00:17,247 INFO [RS:0;jenkins-hbase4:36801] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36801,1685717955362; zookeeper connection closed. 
2023-06-02 15:00:17,247 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008c0d35d90001, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:00:17,247 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7d73fa44] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7d73fa44 2023-06-02 15:00:17,247 INFO [Listener at localhost/44673] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-02 15:00:17,347 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:00:17,347 INFO [M:0;jenkins-hbase4:34025] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34025,1685717955301; zookeeper connection closed. 2023-06-02 15:00:17,347 DEBUG [Listener at localhost/44673-EventThread] zookeeper.ZKWatcher(600): master:34025-0x1008c0d35d90000, quorum=127.0.0.1:51040, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:00:17,348 WARN [Listener at localhost/44673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 15:00:17,351 INFO [Listener at localhost/44673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:00:17,455 WARN [BP-95134758-172.31.14.131-1685717954776 heartbeating to localhost/127.0.0.1:39605] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 15:00:17,455 WARN [BP-95134758-172.31.14.131-1685717954776 heartbeating to localhost/127.0.0.1:39605] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-95134758-172.31.14.131-1685717954776 (Datanode Uuid 6c1e2dd5-54ab-4d58-9ab7-8080887caecb) service to localhost/127.0.0.1:39605 2023-06-02 15:00:17,456 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/dfs/data/data3/current/BP-95134758-172.31.14.131-1685717954776] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:00:17,456 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/dfs/data/data4/current/BP-95134758-172.31.14.131-1685717954776] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:00:17,457 WARN [Listener at localhost/44673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 15:00:17,460 INFO [Listener at localhost/44673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:00:17,564 WARN [BP-95134758-172.31.14.131-1685717954776 heartbeating to localhost/127.0.0.1:39605] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 15:00:17,564 WARN [BP-95134758-172.31.14.131-1685717954776 heartbeating to localhost/127.0.0.1:39605] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-95134758-172.31.14.131-1685717954776 (Datanode 
Uuid 111a2ee7-ccde-4e3b-939b-b51826d48503) service to localhost/127.0.0.1:39605 2023-06-02 15:00:17,565 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/dfs/data/data1/current/BP-95134758-172.31.14.131-1685717954776] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:00:17,565 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/cluster_e4ab000b-5524-2133-9cd7-63cddd3d80e6/dfs/data/data2/current/BP-95134758-172.31.14.131-1685717954776] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:00:17,577 INFO [Listener at localhost/44673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:00:17,618 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 15:00:17,689 INFO [Listener at localhost/44673] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-02 15:00:17,704 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-02 15:00:17,714 INFO [Listener at localhost/44673] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 86) - Thread LEAK? -, OpenFileDescriptor=503 (was 462) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=51 (was 108), ProcessCount=170 (was 170), AvailableMemoryMB=578 (was 625) 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=503, MaxFileDescriptor=60000, SystemLoadAverage=51, ProcessCount=170, AvailableMemoryMB=578 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/hadoop.log.dir so I do NOT create it in target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d80182f1-fcb9-3dbe-cadb-2fea2abe5f34/hadoop.tmp.dir so I do NOT create it in target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7, deleteOnExit=true 2023-06-02 15:00:17,722 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(1082): STARTING 
DFS 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/test.cache.data in system properties and HBase conf 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/hadoop.tmp.dir in system properties and HBase conf 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/hadoop.log.dir in system properties and HBase conf 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-02 15:00:17,723 DEBUG [Listener at localhost/44673] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-02 15:00:17,723 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/nfs.dump.dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/java.io.tmpdir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 15:00:17,724 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-02 15:00:17,725 INFO [Listener at localhost/44673] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-02 15:00:17,726 WARN [Listener at localhost/44673] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-02 15:00:17,729 WARN [Listener at localhost/44673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 15:00:17,729 WARN [Listener at localhost/44673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 15:00:17,770 WARN [Listener at localhost/44673] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:00:17,772 INFO [Listener at localhost/44673] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:00:17,776 INFO [Listener at localhost/44673] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/java.io.tmpdir/Jetty_localhost_33539_hdfs____.gej6mz/webapp 2023-06-02 15:00:17,865 INFO [Listener at localhost/44673] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33539 2023-06-02 15:00:17,867 WARN [Listener at localhost/44673] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
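The lines above show the minicluster for testLogRolling coming back up: StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1}, followed by the embedded NameNode and DataNode Jetty servers. A minimal sketch of how a test typically drives this, assuming the HBase 2.x test API (builder method names are from that API and should be checked against the version in use), would be:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            // Mirrors the option object printed in the log: 1 master,
            // 1 region server, 2 datanodes, 1 ZooKeeper server.
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                .numMasters(1)
                .numRegionServers(1)
                .numDataNodes(2)
                .numZkServers(1)
                .build();
            util.startMiniCluster(option);
            try {
                System.out.println("ZK quorum: "
                    + util.getConfiguration().get("hbase.zookeeper.quorum"));
            } finally {
                util.shutdownMiniCluster();
            }
        }
    }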
2023-06-02 15:00:17,869 WARN [Listener at localhost/44673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 15:00:17,870 WARN [Listener at localhost/44673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 15:00:17,912 WARN [Listener at localhost/45467] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:00:17,921 WARN [Listener at localhost/45467] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 15:00:17,923 WARN [Listener at localhost/45467] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:00:17,924 INFO [Listener at localhost/45467] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:00:17,929 INFO [Listener at localhost/45467] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/java.io.tmpdir/Jetty_localhost_39929_datanode____7qf5kx/webapp 2023-06-02 15:00:18,019 INFO [Listener at localhost/45467] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39929 2023-06-02 15:00:18,025 WARN [Listener at localhost/36941] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:00:18,035 WARN [Listener at localhost/36941] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 15:00:18,038 WARN [Listener at localhost/36941] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:00:18,039 INFO [Listener at localhost/36941] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:00:18,041 INFO [Listener at localhost/36941] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/java.io.tmpdir/Jetty_localhost_33979_datanode____gl6w2q/webapp 2023-06-02 15:00:18,116 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x412a00acad16307a: Processing first storage report for DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf from datanode 59522bee-33a2-4a4f-bda7-39e4c321eb00 2023-06-02 15:00:18,116 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x412a00acad16307a: from storage DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf node DatanodeRegistration(127.0.0.1:38357, datanodeUuid=59522bee-33a2-4a4f-bda7-39e4c321eb00, infoPort=38343, infoSecurePort=0, ipcPort=36941, storageInfo=lv=-57;cid=testClusterID;nsid=1419230798;c=1685718017731), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:00:18,116 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x412a00acad16307a: Processing first storage report for DS-afcc155d-7b22-4c4b-b282-2cb3dc57c2b5 from datanode 59522bee-33a2-4a4f-bda7-39e4c321eb00 2023-06-02 15:00:18,116 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x412a00acad16307a: from storage DS-afcc155d-7b22-4c4b-b282-2cb3dc57c2b5 node DatanodeRegistration(127.0.0.1:38357, datanodeUuid=59522bee-33a2-4a4f-bda7-39e4c321eb00, infoPort=38343, infoSecurePort=0, ipcPort=36941, storageInfo=lv=-57;cid=testClusterID;nsid=1419230798;c=1685718017731), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:00:18,138 INFO [Listener at localhost/36941] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33979 2023-06-02 15:00:18,144 WARN [Listener at localhost/36281] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:00:18,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x329e012d1dc4aad0: Processing first storage report for DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf from datanode 7be8d142-059f-4428-8946-e727ab229074 2023-06-02 15:00:18,236 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x329e012d1dc4aad0: from storage DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf node DatanodeRegistration(127.0.0.1:44647, datanodeUuid=7be8d142-059f-4428-8946-e727ab229074, infoPort=41021, infoSecurePort=0, ipcPort=36281, storageInfo=lv=-57;cid=testClusterID;nsid=1419230798;c=1685718017731), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:00:18,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x329e012d1dc4aad0: Processing first storage report for DS-3bc60888-85a3-4749-8ab9-945a9d27ec07 from datanode 7be8d142-059f-4428-8946-e727ab229074 2023-06-02 15:00:18,236 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x329e012d1dc4aad0: from storage DS-3bc60888-85a3-4749-8ab9-945a9d27ec07 node DatanodeRegistration(127.0.0.1:44647, datanodeUuid=7be8d142-059f-4428-8946-e727ab229074, infoPort=41021, infoSecurePort=0, ipcPort=36281, storageInfo=lv=-57;cid=testClusterID;nsid=1419230798;c=1685718017731), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:00:18,251 DEBUG [Listener at localhost/36281] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81 2023-06-02 15:00:18,253 INFO [Listener at localhost/36281] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/zookeeper_0, clientPort=58021, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-02 15:00:18,254 INFO [Listener at localhost/36281] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58021 2023-06-02 15:00:18,254 INFO [Listener at localhost/36281] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,255 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,266 INFO [Listener at localhost/36281] util.FSUtils(471): Created version file at hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820 with version=8 2023-06-02 15:00:18,267 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging 2023-06-02 15:00:18,268 INFO [Listener at localhost/36281] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 15:00:18,268 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,268 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,268 INFO [Listener at localhost/36281] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 15:00:18,268 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,269 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 15:00:18,269 INFO [Listener at localhost/36281] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 15:00:18,270 INFO [Listener at localhost/36281] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44995 2023-06-02 15:00:18,270 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,271 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,272 INFO [Listener at localhost/36281] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44995 connecting to ZooKeeper ensemble=127.0.0.1:58021 2023-06-02 15:00:18,278 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:449950x0, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 15:00:18,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44995-0x1008c0e2bd00000 connected 2023-06-02 15:00:18,293 DEBUG [Listener at localhost/36281] 
zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:00:18,293 DEBUG [Listener at localhost/36281] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:00:18,293 DEBUG [Listener at localhost/36281] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 15:00:18,294 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44995 2023-06-02 15:00:18,294 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44995 2023-06-02 15:00:18,294 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44995 2023-06-02 15:00:18,294 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44995 2023-06-02 15:00:18,294 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44995 2023-06-02 15:00:18,295 INFO [Listener at localhost/36281] master.HMaster(444): hbase.rootdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820, hbase.cluster.distributed=false 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 15:00:18,307 INFO [Listener at localhost/36281] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 15:00:18,308 INFO [Listener at localhost/36281] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33031 2023-06-02 15:00:18,309 INFO [Listener at localhost/36281] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 15:00:18,310 DEBUG [Listener at localhost/36281] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 
15:00:18,310 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,311 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,312 INFO [Listener at localhost/36281] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33031 connecting to ZooKeeper ensemble=127.0.0.1:58021 2023-06-02 15:00:18,315 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:330310x0, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 15:00:18,316 DEBUG [Listener at localhost/36281] zookeeper.ZKUtil(164): regionserver:330310x0, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:00:18,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33031-0x1008c0e2bd00001 connected 2023-06-02 15:00:18,316 DEBUG [Listener at localhost/36281] zookeeper.ZKUtil(164): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:00:18,317 DEBUG [Listener at localhost/36281] zookeeper.ZKUtil(164): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 15:00:18,317 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33031 2023-06-02 15:00:18,317 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33031 2023-06-02 15:00:18,318 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33031 2023-06-02 15:00:18,318 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33031 2023-06-02 15:00:18,318 DEBUG [Listener at localhost/36281] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33031 2023-06-02 15:00:18,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,322 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 15:00:18,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,323 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 15:00:18,323 DEBUG [Listener at 
localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 15:00:18,323 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 15:00:18,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44995,1685718018268 from backup master directory 2023-06-02 15:00:18,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 15:00:18,326 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,327 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-02 15:00:18,327 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 15:00:18,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/hbase.id with ID: 7f2c572c-d8a3-4223-991d-8e0fc3d1e19d 2023-06-02 15:00:18,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:18,349 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x47f9c329 to 127.0.0.1:58021 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:00:18,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@279ffe43, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:00:18,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for 
table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 15:00:18,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-02 15:00:18,363 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:00:18,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store-tmp 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 15:00:18,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:00:18,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
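The descriptor printed above for the 'proc' family (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', and so on) is built internally by MasterRegion, but the equivalent shape can be expressed with the public 2.x descriptor builders. The sketch below is illustrative only; it merely constructs and prints the descriptor rather than creating the internal master:store table.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
        public static void main(String[] args) {
            // Public-API equivalent of the 'proc' family attributes in the log.
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)
                .setMaxVersions(1)
                .setBlocksize(65536)
                .build();
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("master", "store"))
                .setColumnFamily(proc)
                .build();
            System.out.println(desc);
        }
    }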
2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:00:18,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/WALs/jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44995%2C1685718018268, suffix=, logDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/WALs/jenkins-hbase4.apache.org,44995,1685718018268, archiveDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/oldWALs, maxLogs=10 2023-06-02 15:00:18,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/WALs/jenkins-hbase4.apache.org,44995,1685718018268/jenkins-hbase4.apache.org%2C44995%2C1685718018268.1685718018374 2023-06-02 15:00:18,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38357,DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44647,DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf,DISK]] 2023-06-02 15:00:18,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:00:18,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:18,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,380 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,442 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-02 15:00:18,444 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-02 15:00:18,445 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,449 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:00:18,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:00:18,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=792787, jitterRate=0.00808165967464447}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:00:18,454 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:00:18,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-02 15:00:18,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-02 15:00:18,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-02 15:00:18,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
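The AbstractFSWAL line above reports blocksize=256 MB and rollsize=128 MB: the roll size is the WAL block size scaled by the log-roll multiplier (0.5 by default), which is exactly what TestLogRolling exercises. A small configuration sketch (illustrative, not taken from the test itself) shows the keys involved:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // WAL block size and the fraction of it at which the WAL is rolled.
            conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
            conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
            long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
                * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
            // With the values above this prints 134217728, i.e. the 128 MB rollsize in the log.
            System.out.println("WAL roll size = " + rollSize + " bytes");
        }
    }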
2023-06-02 15:00:18,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-02 15:00:18,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-02 15:00:18,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-02 15:00:18,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-02 15:00:18,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-02 15:00:18,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-02 15:00:18,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-02 15:00:18,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-02 15:00:18,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-02 15:00:18,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-02 15:00:18,471 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-02 15:00:18,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-02 15:00:18,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-02 15:00:18,473 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 15:00:18,473 DEBUG [Listener at localhost/36281-EventThread] 
zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 15:00:18,473 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44995,1685718018268, sessionid=0x1008c0e2bd00000, setting cluster-up flag (Was=false) 2023-06-02 15:00:18,478 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-02 15:00:18,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,489 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-02 15:00:18,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:18,496 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.hbase-snapshot/.tmp 2023-06-02 15:00:18,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 15:00:18,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685718048500 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-02 15:00:18,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
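The executor-service lines above list per-operation thread pools with corePoolSize and maxPoolSize values (MASTER_OPEN_REGION with 5/5, M_LOG_REPLAY_OPS with 10/10, and so on). HBase wraps these in its own ExecutorService class, but the numbers have the usual java.util.concurrent meaning; a plain-JDK sketch of a pool with the same shape as MASTER_OPEN_REGION:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ExecutorPoolSketch {
        public static void main(String[] args) {
            // corePoolSize=5, maxPoolSize=5, unbounded work queue.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
            pool.execute(() -> System.out.println("a region-open style task would run here"));
            pool.shutdown();
        }
    }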
2023-06-02 15:00:18,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-02 15:00:18,501 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 15:00:18,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-02 15:00:18,501 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-02 15:00:18,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-02 15:00:18,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-02 15:00:18,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-02 15:00:18,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718018501,5,FailOnTimeoutGroup] 2023-06-02 15:00:18,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718018502,5,FailOnTimeoutGroup] 2023-06-02 15:00:18,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-02 15:00:18,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
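The ChoreService lines above schedule periodic maintenance chores (LogsCleaner and HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms). ChoreService and ScheduledChore are internal (IA.Private) classes, so the sketch below only illustrates the scheduling pattern the log describes and assumes the 2.x constructor signatures; the chore name and one-second period are invented for the example.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSchedulingSketch {
        public static void main(String[] args) throws InterruptedException {
            Stoppable stopper = new Stoppable() {
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            // Period of 1000, assuming the default millisecond unit that the
            // "period=600000, unit=MILLISECONDS" lines above suggest.
            ScheduledChore chore = new ScheduledChore("demo-chore", stopper, 1000) {
                @Override protected void chore() {
                    System.out.println("chore tick");
                }
            };
            ChoreService service = new ChoreService("demo");
            service.scheduleChore(chore);
            Thread.sleep(3000);
            service.shutdown();
        }
    }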
2023-06-02 15:00:18,503 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 15:00:18,515 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 15:00:18,515 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 15:00:18,515 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820 2023-06-02 15:00:18,522 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:18,523 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 15:00:18,524 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info 2023-06-02 15:00:18,525 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 15:00:18,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 15:00:18,526 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:00:18,527 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 15:00:18,527 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,527 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 15:00:18,528 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/table 2023-06-02 15:00:18,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 15:00:18,529 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,530 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740 2023-06-02 15:00:18,530 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740 2023-06-02 15:00:18,532 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 15:00:18,532 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 15:00:18,534 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:00:18,535 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852385, jitterRate=0.0838639885187149}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 15:00:18,535 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 15:00:18,535 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 15:00:18,535 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 15:00:18,535 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 15:00:18,535 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 15:00:18,535 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 15:00:18,536 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 15:00:18,536 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 15:00:18,537 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(951): ClusterId : 7f2c572c-d8a3-4223-991d-8e0fc3d1e19d 2023-06-02 15:00:18,537 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 15:00:18,537 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 15:00:18,537 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 
2023-06-02 15:00:18,538 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 15:00:18,539 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 15:00:18,540 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-02 15:00:18,540 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 15:00:18,540 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 15:00:18,543 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 15:00:18,543 DEBUG [RS:0;jenkins-hbase4:33031] zookeeper.ReadOnlyZKClient(139): Connect 0x10815870 to 127.0.0.1:58021 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:00:18,547 DEBUG [RS:0;jenkins-hbase4:33031] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e46a410, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:00:18,547 DEBUG [RS:0;jenkins-hbase4:33031] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a793300, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 15:00:18,556 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33031 2023-06-02 15:00:18,556 INFO [RS:0;jenkins-hbase4:33031] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 15:00:18,556 INFO [RS:0;jenkins-hbase4:33031] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 15:00:18,556 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-02 15:00:18,556 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44995,1685718018268 with isa=jenkins-hbase4.apache.org/172.31.14.131:33031, startcode=1685718018307 2023-06-02 15:00:18,556 DEBUG [RS:0;jenkins-hbase4:33031] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 15:00:18,559 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51941, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 15:00:18,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,561 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820 2023-06-02 15:00:18,561 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45467 2023-06-02 15:00:18,561 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 15:00:18,563 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:00:18,563 DEBUG [RS:0;jenkins-hbase4:33031] zookeeper.ZKUtil(162): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,564 WARN [RS:0;jenkins-hbase4:33031] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 15:00:18,564 INFO [RS:0;jenkins-hbase4:33031] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:00:18,564 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1946): logDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,564 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33031,1685718018307] 2023-06-02 15:00:18,567 DEBUG [RS:0;jenkins-hbase4:33031] zookeeper.ZKUtil(162): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,568 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 15:00:18,568 INFO [RS:0;jenkins-hbase4:33031] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 15:00:18,569 INFO [RS:0;jenkins-hbase4:33031] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 15:00:18,570 INFO [RS:0;jenkins-hbase4:33031] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 15:00:18,570 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,570 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 15:00:18,571 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 15:00:18,571 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,571 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 DEBUG [RS:0;jenkins-hbase4:33031] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:00:18,572 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,572 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,572 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,583 INFO [RS:0;jenkins-hbase4:33031] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 15:00:18,583 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33031,1685718018307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 15:00:18,593 INFO [RS:0;jenkins-hbase4:33031] regionserver.Replication(203): jenkins-hbase4.apache.org,33031,1685718018307 started 2023-06-02 15:00:18,593 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33031,1685718018307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33031, sessionid=0x1008c0e2bd00001 2023-06-02 15:00:18,593 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 15:00:18,593 DEBUG [RS:0;jenkins-hbase4:33031] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,593 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33031,1685718018307' 2023-06-02 15:00:18,593 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33031,1685718018307' 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 15:00:18,594 DEBUG [RS:0;jenkins-hbase4:33031] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 15:00:18,594 INFO [RS:0;jenkins-hbase4:33031] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 15:00:18,595 INFO [RS:0;jenkins-hbase4:33031] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-02 15:00:18,690 DEBUG [jenkins-hbase4:44995] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 15:00:18,691 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33031,1685718018307, state=OPENING 2023-06-02 15:00:18,693 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 15:00:18,694 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:18,694 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 15:00:18,694 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:18,697 INFO [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33031%2C1685718018307, suffix=, logDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307, archiveDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs, maxLogs=32 2023-06-02 15:00:18,706 INFO [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718018698 2023-06-02 15:00:18,706 DEBUG [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38357,DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44647,DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf,DISK]] 2023-06-02 15:00:18,848 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:18,849 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 15:00:18,852 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48122, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 15:00:18,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 15:00:18,856 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:00:18,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33031%2C1685718018307.meta, suffix=.meta, logDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307, archiveDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs, maxLogs=32 2023-06-02 15:00:18,875 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.meta.1685718018859.meta 2023-06-02 15:00:18,875 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44647,DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf,DISK], DatanodeInfoWithStorage[127.0.0.1:38357,DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf,DISK]] 2023-06-02 15:00:18,875 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 15:00:18,876 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 15:00:18,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 15:00:18,879 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 15:00:18,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info 2023-06-02 15:00:18,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info 2023-06-02 15:00:18,880 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 15:00:18,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 15:00:18,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:00:18,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:00:18,882 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 15:00:18,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 15:00:18,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/table 2023-06-02 15:00:18,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/table 2023-06-02 15:00:18,884 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 15:00:18,885 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:18,886 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740 2023-06-02 15:00:18,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740 2023-06-02 15:00:18,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 15:00:18,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 15:00:18,894 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807019, jitterRate=0.02617865800857544}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 15:00:18,894 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 15:00:18,896 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685718018848 2023-06-02 15:00:18,901 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 15:00:18,901 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 15:00:18,902 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33031,1685718018307, state=OPEN 2023-06-02 15:00:18,904 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 15:00:18,905 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 15:00:18,908 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 15:00:18,908 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33031,1685718018307 in 210 msec 2023-06-02 15:00:18,911 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 15:00:18,911 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-06-02 15:00:18,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 415 msec 2023-06-02 15:00:18,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685718018914, completionTime=-1 2023-06-02 15:00:18,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 15:00:18,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 15:00:18,938 DEBUG [hconnection-0x667948f8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 15:00:18,941 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48128, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 15:00:18,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 15:00:18,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685718078943 2023-06-02 15:00:18,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685718138943 2023-06-02 15:00:18,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 28 msec 2023-06-02 15:00:18,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44995,1685718018268-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44995,1685718018268-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44995,1685718018268-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44995, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 15:00:18,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-02 15:00:18,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 15:00:18,951 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 15:00:18,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 15:00:18,953 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 15:00:18,954 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 15:00:18,955 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:18,956 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586 empty. 2023-06-02 15:00:18,956 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:18,956 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 15:00:18,965 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 15:00:18,966 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e02dc5cf8c3d540f6f90d3da5ac78586, NAME => 'hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp 2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e02dc5cf8c3d540f6f90d3da5ac78586, disabling compactions & flushes 2023-06-02 15:00:18,973 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 
2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. after waiting 0 ms 2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:00:18,973 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:00:18,973 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e02dc5cf8c3d540f6f90d3da5ac78586: 2023-06-02 15:00:18,976 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 15:00:18,977 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718018976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718018976"}]},"ts":"1685718018976"} 2023-06-02 15:00:18,979 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 15:00:18,980 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 15:00:18,980 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718018980"}]},"ts":"1685718018980"} 2023-06-02 15:00:18,981 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 15:00:18,987 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e02dc5cf8c3d540f6f90d3da5ac78586, ASSIGN}] 2023-06-02 15:00:18,989 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e02dc5cf8c3d540f6f90d3da5ac78586, ASSIGN 2023-06-02 15:00:18,990 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e02dc5cf8c3d540f6f90d3da5ac78586, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33031,1685718018307; forceNewPlan=false, retain=false 2023-06-02 15:00:19,141 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e02dc5cf8c3d540f6f90d3da5ac78586, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:19,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718019141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718019141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718019141"}]},"ts":"1685718019141"} 2023-06-02 15:00:19,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e02dc5cf8c3d540f6f90d3da5ac78586, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:19,301 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:00:19,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e02dc5cf8c3d540f6f90d3da5ac78586, NAME => 'hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:00:19,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:19,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,303 INFO [StoreOpener-e02dc5cf8c3d540f6f90d3da5ac78586-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,305 DEBUG [StoreOpener-e02dc5cf8c3d540f6f90d3da5ac78586-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/info 2023-06-02 15:00:19,305 DEBUG [StoreOpener-e02dc5cf8c3d540f6f90d3da5ac78586-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/info 2023-06-02 15:00:19,305 INFO [StoreOpener-e02dc5cf8c3d540f6f90d3da5ac78586-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e02dc5cf8c3d540f6f90d3da5ac78586 columnFamilyName info 2023-06-02 15:00:19,305 INFO [StoreOpener-e02dc5cf8c3d540f6f90d3da5ac78586-1] regionserver.HStore(310): Store=e02dc5cf8c3d540f6f90d3da5ac78586/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:19,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:00:19,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:00:19,311 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e02dc5cf8c3d540f6f90d3da5ac78586; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=802136, jitterRate=0.019968688488006592}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:00:19,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e02dc5cf8c3d540f6f90d3da5ac78586: 2023-06-02 15:00:19,313 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586., pid=6, masterSystemTime=1685718019298 2023-06-02 15:00:19,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:00:19,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 
2023-06-02 15:00:19,315 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e02dc5cf8c3d540f6f90d3da5ac78586, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:19,316 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718019315"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685718019315"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685718019315"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718019315"}]},"ts":"1685718019315"} 2023-06-02 15:00:19,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 15:00:19,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e02dc5cf8c3d540f6f90d3da5ac78586, server=jenkins-hbase4.apache.org,33031,1685718018307 in 172 msec 2023-06-02 15:00:19,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 15:00:19,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e02dc5cf8c3d540f6f90d3da5ac78586, ASSIGN in 332 msec 2023-06-02 15:00:19,322 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 15:00:19,323 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718019322"}]},"ts":"1685718019322"} 2023-06-02 15:00:19,324 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 15:00:19,326 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 15:00:19,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 376 msec 2023-06-02 15:00:19,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 15:00:19,355 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:00:19,355 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:19,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 15:00:19,367 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): 
master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:00:19,371 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-06-02 15:00:19,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 15:00:19,387 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:00:19,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-06-02 15:00:19,397 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 15:00:19,398 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.072sec 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44995,1685718018268-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 15:00:19,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44995,1685718018268-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-02 15:00:19,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 15:00:19,438 DEBUG [Listener at localhost/36281] zookeeper.ReadOnlyZKClient(139): Connect 0x29e2f867 to 127.0.0.1:58021 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:00:19,443 DEBUG [Listener at localhost/36281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ec0b3f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:00:19,444 DEBUG [hconnection-0x53bd7cf6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 15:00:19,447 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48144, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 15:00:19,448 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:00:19,448 INFO [Listener at localhost/36281] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:00:19,453 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 15:00:19,453 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:00:19,454 INFO [Listener at localhost/36281] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 15:00:19,455 DEBUG [Listener at localhost/36281] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-02 15:00:19,457 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-02 15:00:19,459 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-02 15:00:19,459 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-02 15:00:19,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 15:00:19,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-02 15:00:19,465 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 15:00:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-02 15:00:19,466 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 15:00:19,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 15:00:19,468 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,468 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6 empty. 
2023-06-02 15:00:19,469 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,469 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-02 15:00:19,478 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-02 15:00:19,479 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc944a5f55c4a238ecd076504cad31b6, NAME => 'TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/.tmp 2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing cc944a5f55c4a238ecd076504cad31b6, disabling compactions & flushes 2023-06-02 15:00:19,486 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. after waiting 0 ms 2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:19,486 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 
2023-06-02 15:00:19,486 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:19,488 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 15:00:19,489 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718019489"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718019489"}]},"ts":"1685718019489"} 2023-06-02 15:00:19,491 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 15:00:19,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 15:00:19,492 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718019492"}]},"ts":"1685718019492"} 2023-06-02 15:00:19,493 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-02 15:00:19,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, ASSIGN}] 2023-06-02 15:00:19,498 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, ASSIGN 2023-06-02 15:00:19,499 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33031,1685718018307; forceNewPlan=false, retain=false 2023-06-02 15:00:19,650 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cc944a5f55c4a238ecd076504cad31b6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:19,650 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718019650"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718019650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718019650"}]},"ts":"1685718019650"} 2023-06-02 15:00:19,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:19,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:19,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc944a5f55c4a238ecd076504cad31b6, NAME => 'TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:00:19,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:19,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,810 INFO [StoreOpener-cc944a5f55c4a238ecd076504cad31b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,811 DEBUG [StoreOpener-cc944a5f55c4a238ecd076504cad31b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info 2023-06-02 15:00:19,811 DEBUG [StoreOpener-cc944a5f55c4a238ecd076504cad31b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info 2023-06-02 15:00:19,812 INFO [StoreOpener-cc944a5f55c4a238ecd076504cad31b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc944a5f55c4a238ecd076504cad31b6 columnFamilyName info 2023-06-02 15:00:19,812 INFO [StoreOpener-cc944a5f55c4a238ecd076504cad31b6-1] regionserver.HStore(310): Store=cc944a5f55c4a238ecd076504cad31b6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:19,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:19,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:00:19,818 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc944a5f55c4a238ecd076504cad31b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855108, jitterRate=0.0873260647058487}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:00:19,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:19,819 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6., pid=11, masterSystemTime=1685718019805 2023-06-02 15:00:19,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:19,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 
2023-06-02 15:00:19,821 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cc944a5f55c4a238ecd076504cad31b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:19,821 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718019821"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685718019821"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685718019821"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718019821"}]},"ts":"1685718019821"} 2023-06-02 15:00:19,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-02 15:00:19,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307 in 171 msec 2023-06-02 15:00:19,827 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-02 15:00:19,827 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, ASSIGN in 329 msec 2023-06-02 15:00:19,828 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 15:00:19,828 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718019828"}]},"ts":"1685718019828"} 2023-06-02 15:00:19,830 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-02 15:00:19,833 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 15:00:19,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 374 msec 2023-06-02 15:00:22,528 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 15:00:24,568 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-02 15:00:24,569 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 15:00:24,569 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-02 15:00:29,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44995] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-02 15:00:29,468 INFO [Listener at localhost/36281] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-06-02 15:00:29,470 DEBUG [Listener at localhost/36281] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-02 15:00:29,470 DEBUG [Listener at localhost/36281] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:29,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:29,482 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:00:29,494 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/e05b896a2ec64e2bb01558848a62043e 2023-06-02 15:00:29,503 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/e05b896a2ec64e2bb01558848a62043e as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e 2023-06-02 15:00:29,509 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e, entries=7, sequenceid=11, filesize=12.1 K 2023-06-02 15:00:29,510 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for cc944a5f55c4a238ecd076504cad31b6 in 28ms, sequenceid=11, compaction requested=false 2023-06-02 15:00:29,510 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:29,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:29,511 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-02 15:00:29,522 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/1292501a3f6a413bb25c7716abcb4974 2023-06-02 15:00:29,528 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/1292501a3f6a413bb25c7716abcb4974 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 2023-06-02 15:00:29,533 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974, entries=23, sequenceid=37, filesize=29.0 K 2023-06-02 15:00:29,534 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=2.10 KB/2152 for cc944a5f55c4a238ecd076504cad31b6 in 23ms, sequenceid=37, compaction requested=false 2023-06-02 15:00:29,534 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:29,534 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=41.1 K, sizeToCheck=16.0 K 2023-06-02 15:00:29,534 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:29,534 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 because midkey is the same as first or last row 2023-06-02 15:00:31,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:31,520 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:00:31,534 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=47 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/91b29d30e50c488d9d96a73ea600518a 2023-06-02 15:00:31,540 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/91b29d30e50c488d9d96a73ea600518a as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a 2023-06-02 15:00:31,546 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a, entries=7, sequenceid=47, filesize=12.1 K 2023-06-02 15:00:31,547 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for cc944a5f55c4a238ecd076504cad31b6 in 27ms, sequenceid=47, compaction requested=true 2023-06-02 15:00:31,547 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:31,547 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=53.2 K, sizeToCheck=16.0 K 2023-06-02 15:00:31,547 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:31,547 DEBUG 
[MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 because midkey is the same as first or last row 2023-06-02 15:00:31,547 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:31,548 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:00:31,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:31,549 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-06-02 15:00:31,549 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 54449 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:00:31,550 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): cc944a5f55c4a238ecd076504cad31b6/info is initiating minor compaction (all files) 2023-06-02 15:00:31,550 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of cc944a5f55c4a238ecd076504cad31b6/info in TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:31,550 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp, totalSize=53.2 K 2023-06-02 15:00:31,551 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting e05b896a2ec64e2bb01558848a62043e, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685718029473 2023-06-02 15:00:31,552 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 1292501a3f6a413bb25c7716abcb4974, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1685718029483 2023-06-02 15:00:31,553 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 91b29d30e50c488d9d96a73ea600518a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685718029512 2023-06-02 15:00:31,576 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=23.12 KB at sequenceid=72 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/574cea7610d44e75b660c66ed07ce764 2023-06-02 15:00:31,579 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): cc944a5f55c4a238ecd076504cad31b6#info#compaction#29 average throughput is 37.97 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:00:31,583 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/574cea7610d44e75b660c66ed07ce764 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764 2023-06-02 15:00:31,589 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764, entries=22, sequenceid=72, filesize=27.9 K 2023-06-02 15:00:31,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=5.25 KB/5380 for cc944a5f55c4a238ecd076504cad31b6 in 42ms, sequenceid=72, compaction requested=false 2023-06-02 15:00:31,590 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:31,590 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=81.1 K, sizeToCheck=16.0 K 2023-06-02 15:00:31,590 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:31,590 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 because midkey is the same as first or last row 2023-06-02 15:00:31,599 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/5338062a1dd543b283361f80812f66f8 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 2023-06-02 15:00:31,604 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in cc944a5f55c4a238ecd076504cad31b6/info of cc944a5f55c4a238ecd076504cad31b6 into 5338062a1dd543b283361f80812f66f8(size=43.8 K), total size for store is 71.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:00:31,604 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:31,604 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6., storeName=cc944a5f55c4a238ecd076504cad31b6/info, priority=13, startTime=1685718031547; duration=0sec 2023-06-02 15:00:31,605 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.7 K, sizeToCheck=16.0 K 2023-06-02 15:00:31,605 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:31,605 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 because midkey is the same as first or last row 2023-06-02 15:00:31,605 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:33,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,564 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:00:33,576 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=83 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/880a960cf02942a7b0e5173adeed34b3 2023-06-02 15:00:33,582 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/880a960cf02942a7b0e5173adeed34b3 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3 2023-06-02 15:00:33,587 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3, entries=7, sequenceid=83, filesize=12.1 K 2023-06-02 15:00:33,588 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for cc944a5f55c4a238ecd076504cad31b6 in 25ms, sequenceid=83, compaction requested=true 2023-06-02 15:00:33,588 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:33,588 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=83.8 K, sizeToCheck=16.0 K 2023-06-02 15:00:33,588 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:33,588 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 because midkey is the same as first or last row 2023-06-02 15:00:33,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,588 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:00:33,588 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:33,589 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-06-02 15:00:33,590 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 85841 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:00:33,590 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): cc944a5f55c4a238ecd076504cad31b6/info is initiating minor compaction (all files) 2023-06-02 15:00:33,590 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of cc944a5f55c4a238ecd076504cad31b6/info in TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 
2023-06-02 15:00:33,590 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp, totalSize=83.8 K 2023-06-02 15:00:33,591 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 5338062a1dd543b283361f80812f66f8, keycount=37, bloomtype=ROW, size=43.8 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685718029473 2023-06-02 15:00:33,591 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 574cea7610d44e75b660c66ed07ce764, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=72, earliestPutTs=1685718031521 2023-06-02 15:00:33,592 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 880a960cf02942a7b0e5173adeed34b3, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685718031549 2023-06-02 15:00:33,600 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-02 15:00:33,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] ipc.CallRunner(144): callId: 104 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48144 deadline: 1685718043599, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:33,601 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=108 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/2e006582159c4536b876597851bb23de 2023-06-02 15:00:33,606 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): cc944a5f55c4a238ecd076504cad31b6#info#compaction#32 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:00:33,606 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/2e006582159c4536b876597851bb23de as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/2e006582159c4536b876597851bb23de 2023-06-02 15:00:33,615 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/2e006582159c4536b876597851bb23de, entries=22, sequenceid=108, filesize=27.9 K 2023-06-02 15:00:33,616 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=7.36 KB/7532 for cc944a5f55c4a238ecd076504cad31b6 in 27ms, sequenceid=108, compaction requested=false 2023-06-02 15:00:33,616 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:33,616 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=111.7 K, sizeToCheck=16.0 K 2023-06-02 15:00:33,616 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:33,616 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 because midkey is the same as first or last row 2023-06-02 15:00:33,620 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/14af675034614ce1802aa5032830b247 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247 2023-06-02 15:00:33,625 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in cc944a5f55c4a238ecd076504cad31b6/info of cc944a5f55c4a238ecd076504cad31b6 into 14af675034614ce1802aa5032830b247(size=74.6 K), total size for store is 102.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:00:33,625 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:33,625 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6., storeName=cc944a5f55c4a238ecd076504cad31b6/info, priority=13, startTime=1685718033588; duration=0sec 2023-06-02 15:00:33,625 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=102.5 K, sizeToCheck=16.0 K 2023-06-02 15:00:33,625 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-02 15:00:33,626 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:33,626 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:33,627 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44995] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,33031,1685718018307, parent={ENCODED => cc944a5f55c4a238ecd076504cad31b6, NAME => 'TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-02 15:00:33,633 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44995] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:33,639 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44995] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=cc944a5f55c4a238ecd076504cad31b6, daughterA=c5ae726efbadfef5d5da8ff18a640d59, daughterB=871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:33,640 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=cc944a5f55c4a238ecd076504cad31b6, daughterA=c5ae726efbadfef5d5da8ff18a640d59, daughterB=871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:33,640 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=cc944a5f55c4a238ecd076504cad31b6, daughterA=c5ae726efbadfef5d5da8ff18a640d59, daughterB=871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:33,640 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=cc944a5f55c4a238ecd076504cad31b6, daughterA=c5ae726efbadfef5d5da8ff18a640d59, daughterB=871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:33,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, UNASSIGN}] 2023-06-02 15:00:33,649 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, UNASSIGN 2023-06-02 15:00:33,650 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cc944a5f55c4a238ecd076504cad31b6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:33,650 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718033650"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718033650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718033650"}]},"ts":"1685718033650"} 2023-06-02 15:00:33,652 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:33,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc944a5f55c4a238ecd076504cad31b6, disabling compactions & flushes 2023-06-02 15:00:33,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:33,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:33,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. after waiting 0 ms 2023-06-02 15:00:33,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 
2023-06-02 15:00:33,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cc944a5f55c4a238ecd076504cad31b6 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:00:33,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=119 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/f618aeb30b85463fac8535849397a59f 2023-06-02 15:00:33,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.tmp/info/f618aeb30b85463fac8535849397a59f as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/f618aeb30b85463fac8535849397a59f 2023-06-02 15:00:33,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/f618aeb30b85463fac8535849397a59f, entries=7, sequenceid=119, filesize=12.1 K 2023-06-02 15:00:33,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for cc944a5f55c4a238ecd076504cad31b6 in 22ms, sequenceid=119, compaction requested=true 2023-06-02 15:00:33,837 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3] to archive 2023-06-02 15:00:33,838 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-02 15:00:33,839 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/e05b896a2ec64e2bb01558848a62043e 2023-06-02 15:00:33,841 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/1292501a3f6a413bb25c7716abcb4974 2023-06-02 15:00:33,842 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/5338062a1dd543b283361f80812f66f8 2023-06-02 15:00:33,843 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/91b29d30e50c488d9d96a73ea600518a 2023-06-02 15:00:33,844 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/574cea7610d44e75b660c66ed07ce764 2023-06-02 15:00:33,845 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/880a960cf02942a7b0e5173adeed34b3 2023-06-02 
15:00:33,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=1 2023-06-02 15:00:33,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 2023-06-02 15:00:33,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc944a5f55c4a238ecd076504cad31b6: 2023-06-02 15:00:33,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,853 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cc944a5f55c4a238ecd076504cad31b6, regionState=CLOSED 2023-06-02 15:00:33,854 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718033853"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718033853"}]},"ts":"1685718033853"} 2023-06-02 15:00:33,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-02 15:00:33,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure cc944a5f55c4a238ecd076504cad31b6, server=jenkins-hbase4.apache.org,33031,1685718018307 in 203 msec 2023-06-02 15:00:33,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-02 15:00:33,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc944a5f55c4a238ecd076504cad31b6, UNASSIGN in 209 msec 2023-06-02 15:00:33,870 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=cc944a5f55c4a238ecd076504cad31b6, threads=3 2023-06-02 15:00:33,872 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247 for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,872 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/2e006582159c4536b876597851bb23de for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,872 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/f618aeb30b85463fac8535849397a59f for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,880 DEBUG [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/f618aeb30b85463fac8535849397a59f, top=true 2023-06-02 15:00:33,880 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/2e006582159c4536b876597851bb23de, top=true 2023-06-02 15:00:33,885 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.splits/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de for child: 871c9ceafcb24ea9b4813ed477334690, parent: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,885 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/2e006582159c4536b876597851bb23de for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,885 INFO [StoreFileSplitter-pool-2] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/.splits/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f for child: 871c9ceafcb24ea9b4813ed477334690, parent: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,885 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/f618aeb30b85463fac8535849397a59f for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,909 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247 for region: cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:00:33,909 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region cc944a5f55c4a238ecd076504cad31b6 Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
2023-06-02 15:00:33,932 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=-1 2023-06-02 15:00:33,934 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/recovered.edits/122.seqid, newMaxSeqId=122, maxSeqId=-1 2023-06-02 15:00:33,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685718033936"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685718033936"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685718033936"}]},"ts":"1685718033936"} 2023-06-02 15:00:33,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718033936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718033936"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718033936"}]},"ts":"1685718033936"} 2023-06-02 15:00:33,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718033936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718033936"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718033936"}]},"ts":"1685718033936"} 2023-06-02 15:00:33,976 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33031] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-02 15:00:33,976 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-06-02 15:00:33,976 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-02 15:00:33,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c5ae726efbadfef5d5da8ff18a640d59, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=871c9ceafcb24ea9b4813ed477334690, ASSIGN}] 2023-06-02 15:00:33,986 WARN [DataStreamer for file /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/info/c4c7a6ad71f049d988c0f78a4faddec7] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-06-02 15:00:33,986 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c5ae726efbadfef5d5da8ff18a640d59, ASSIGN 2023-06-02 15:00:33,986 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=871c9ceafcb24ea9b4813ed477334690, ASSIGN 2023-06-02 15:00:33,986 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/info/c4c7a6ad71f049d988c0f78a4faddec7 2023-06-02 15:00:33,987 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c5ae726efbadfef5d5da8ff18a640d59, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,33031,1685718018307; forceNewPlan=false, retain=false 2023-06-02 15:00:33,987 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=871c9ceafcb24ea9b4813ed477334690, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,33031,1685718018307; forceNewPlan=false, retain=false 2023-06-02 15:00:33,998 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/table/90dbdf1174644840a80d84a33a41d61c 2023-06-02 15:00:34,003 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/info/c4c7a6ad71f049d988c0f78a4faddec7 as 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info/c4c7a6ad71f049d988c0f78a4faddec7 2023-06-02 15:00:34,007 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info/c4c7a6ad71f049d988c0f78a4faddec7, entries=29, sequenceid=17, filesize=8.6 K 2023-06-02 15:00:34,008 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/table/90dbdf1174644840a80d84a33a41d61c as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/table/90dbdf1174644840a80d84a33a41d61c 2023-06-02 15:00:34,012 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/table/90dbdf1174644840a80d84a33a41d61c, entries=4, sequenceid=17, filesize=4.8 K 2023-06-02 15:00:34,013 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 37ms, sequenceid=17, compaction requested=false 2023-06-02 15:00:34,014 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-02 15:00:34,138 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=c5ae726efbadfef5d5da8ff18a640d59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:34,138 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=871c9ceafcb24ea9b4813ed477334690, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:34,138 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718034138"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718034138"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718034138"}]},"ts":"1685718034138"} 2023-06-02 15:00:34,139 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718034138"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718034138"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718034138"}]},"ts":"1685718034138"} 2023-06-02 15:00:34,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure c5ae726efbadfef5d5da8ff18a640d59, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:34,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307}] 2023-06-02 15:00:34,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:00:34,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 871c9ceafcb24ea9b4813ed477334690, NAME => 'TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-02 15:00:34,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:34,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,297 INFO [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,298 DEBUG [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info 2023-06-02 15:00:34,298 DEBUG [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info 2023-06-02 15:00:34,298 INFO [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 871c9ceafcb24ea9b4813ed477334690 columnFamilyName info 2023-06-02 15:00:34,308 DEBUG [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] regionserver.HStore(539): loaded hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-top 2023-06-02 15:00:34,313 DEBUG 
[StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] regionserver.HStore(539): loaded hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de 2023-06-02 15:00:34,317 DEBUG [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] regionserver.HStore(539): loaded hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f 2023-06-02 15:00:34,317 INFO [StoreOpener-871c9ceafcb24ea9b4813ed477334690-1] regionserver.HStore(310): Store=871c9ceafcb24ea9b4813ed477334690/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:00:34,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 871c9ceafcb24ea9b4813ed477334690; next sequenceid=123; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=834529, jitterRate=0.061158791184425354}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:00:34,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:00:34,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., pid=18, masterSystemTime=1685718034292 2023-06-02 15:00:34,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:34,324 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:00:34,325 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:00:34,325 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:00:34,325 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:00:34,325 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-top, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=114.6 K 2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:00:34,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:00:34,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 
2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5ae726efbadfef5d5da8ff18a640d59, NAME => 'TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-02 15:00:34,326 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6, keycount=33, bloomtype=ROW, size=74.6 K, encoding=NONE, compression=NONE, seqNum=84, earliestPutTs=1685718029473 2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:00:34,326 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=871c9ceafcb24ea9b4813ed477334690, regionState=OPEN, openSeqNum=123, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,326 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=108, earliestPutTs=1685718033564 2023-06-02 15:00:34,326 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718034326"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685718034326"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685718034326"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718034326"}]},"ts":"1685718034326"} 2023-06-02 15:00:34,327 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=119, earliestPutTs=1685718033589 2023-06-02 15:00:34,328 INFO [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,328 DEBUG [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info 2023-06-02 15:00:34,329 DEBUG [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info 2023-06-02 15:00:34,329 INFO [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5ae726efbadfef5d5da8ff18a640d59 columnFamilyName info 2023-06-02 15:00:34,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-06-02 15:00:34,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307 in 187 msec 2023-06-02 15:00:34,332 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=871c9ceafcb24ea9b4813ed477334690, ASSIGN in 345 msec 2023-06-02 15:00:34,338 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#36 average throughput is 17.44 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:00:34,338 DEBUG [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] regionserver.HStore(539): loaded hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-bottom 2023-06-02 15:00:34,338 INFO [StoreOpener-c5ae726efbadfef5d5da8ff18a640d59-1] regionserver.HStore(310): Store=c5ae726efbadfef5d5da8ff18a640d59/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:00:34,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:00:34,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5ae726efbadfef5d5da8ff18a640d59; next sequenceid=123; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=816842, jitterRate=0.03866860270500183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:00:34,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5ae726efbadfef5d5da8ff18a640d59: 2023-06-02 15:00:34,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59., pid=17, masterSystemTime=1685718034292 2023-06-02 15:00:34,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:34,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:00:34,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 
2023-06-02 15:00:34,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=c5ae726efbadfef5d5da8ff18a640d59, regionState=OPEN, openSeqNum=123, regionLocation=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:00:34,347 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685718034347"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685718034347"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685718034347"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718034347"}]},"ts":"1685718034347"} 2023-06-02 15:00:34,352 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-02 15:00:34,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-06-02 15:00:34,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure c5ae726efbadfef5d5da8ff18a640d59, server=jenkins-hbase4.apache.org,33031,1685718018307 in 209 msec 2023-06-02 15:00:34,353 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:00:34,353 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HStore(1912): c5ae726efbadfef5d5da8ff18a640d59/info is initiating minor compaction (all files) 2023-06-02 15:00:34,353 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HRegion(2259): Starting compaction of c5ae726efbadfef5d5da8ff18a640d59/info in TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 
2023-06-02 15:00:34,353 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-bottom] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/.tmp, totalSize=74.6 K 2023-06-02 15:00:34,354 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] compactions.Compactor(207): Compacting 14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6, keycount=33, bloomtype=ROW, size=74.6 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685718029473 2023-06-02 15:00:34,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-06-02 15:00:34,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c5ae726efbadfef5d5da8ff18a640d59, ASSIGN in 368 msec 2023-06-02 15:00:34,358 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=cc944a5f55c4a238ecd076504cad31b6, daughterA=c5ae726efbadfef5d5da8ff18a640d59, daughterB=871c9ceafcb24ea9b4813ed477334690 in 723 msec 2023-06-02 15:00:34,369 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] throttle.PressureAwareThroughputController(145): c5ae726efbadfef5d5da8ff18a640d59#info#compaction#37 average throughput is 20.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:00:34,384 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/.tmp/info/e3f9e2767b6c42d69fbf1e78efca76cd as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/e3f9e2767b6c42d69fbf1e78efca76cd 2023-06-02 15:00:34,390 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in c5ae726efbadfef5d5da8ff18a640d59/info of c5ae726efbadfef5d5da8ff18a640d59 into e3f9e2767b6c42d69fbf1e78efca76cd(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:00:34,390 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for c5ae726efbadfef5d5da8ff18a640d59: 2023-06-02 15:00:34,390 INFO [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59., storeName=c5ae726efbadfef5d5da8ff18a640d59/info, priority=15, startTime=1685718034345; duration=0sec 2023-06-02 15:00:34,390 DEBUG [RS:0;jenkins-hbase4:33031-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:34,771 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/b37cd489845b4d778cccfdbd2798955f as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/b37cd489845b4d778cccfdbd2798955f 2023-06-02 15:00:34,776 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into b37cd489845b4d778cccfdbd2798955f(size=40.8 K), total size for store is 40.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-02 15:00:34,776 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:00:34,776 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718034323; duration=0sec 2023-06-02 15:00:34,776 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:00:39,409 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 15:00:43,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] ipc.CallRunner(144): callId: 106 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48144 deadline: 1685718053669, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685718019459.cc944a5f55c4a238ecd076504cad31b6. 
is not online on jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:04,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=30, reuseRatio=69.77% 2023-06-02 15:01:04,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-06-02 15:01:05,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:05,725 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:01:05,743 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=133 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/045a7ebeb2c84fca98d17e24cf109f16 2023-06-02 15:01:05,749 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/045a7ebeb2c84fca98d17e24cf109f16 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16 2023-06-02 15:01:05,753 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-02 15:01:05,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] ipc.CallRunner(144): callId: 139 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48144 deadline: 1685718075753, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:05,755 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16, entries=7, sequenceid=133, filesize=12.1 K 2023-06-02 15:01:05,756 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 871c9ceafcb24ea9b4813ed477334690 in 31ms, sequenceid=133, compaction requested=false 2023-06-02 15:01:05,756 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:11,678 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-02 15:01:15,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:15,856 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-02 15:01:15,868 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=159 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/3df8e2954f204e4992c927e7ed730b18 2023-06-02 15:01:15,874 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/3df8e2954f204e4992c927e7ed730b18 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18 2023-06-02 15:01:15,880 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18, entries=23, sequenceid=159, filesize=29.0 K 2023-06-02 15:01:15,881 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for 871c9ceafcb24ea9b4813ed477334690 in 25ms, sequenceid=159, compaction requested=true 2023-06-02 15:01:15,881 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:15,882 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:15,882 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:01:15,883 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83875 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:01:15,883 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:01:15,883 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:15,883 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/b37cd489845b4d778cccfdbd2798955f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=81.9 K 2023-06-02 15:01:15,883 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting b37cd489845b4d778cccfdbd2798955f, keycount=34, bloomtype=ROW, size=40.8 K, encoding=NONE, compression=NONE, seqNum=119, earliestPutTs=1685718031553 2023-06-02 15:01:15,884 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 045a7ebeb2c84fca98d17e24cf109f16, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=133, earliestPutTs=1685718063717 2023-06-02 15:01:15,884 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 3df8e2954f204e4992c927e7ed730b18, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1685718065725 2023-06-02 15:01:15,893 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] 
throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#40 average throughput is 65.67 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:15,911 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/19715251c1b34ca8a8220355a2add41f as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/19715251c1b34ca8a8220355a2add41f 2023-06-02 15:01:15,917 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into 19715251c1b34ca8a8220355a2add41f(size=72.6 K), total size for store is 72.6 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-02 15:01:15,917 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:15,917 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718075882; duration=0sec 2023-06-02 15:01:15,917 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:17,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:17,866 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:01:17,875 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=170 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/d515697452c0411a8634861e3e4d84bb 2023-06-02 15:01:17,881 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/d515697452c0411a8634861e3e4d84bb as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb 2023-06-02 15:01:17,887 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb, entries=7, sequenceid=170, filesize=12.1 K 2023-06-02 15:01:17,888 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 
KB/20444 for 871c9ceafcb24ea9b4813ed477334690 in 22ms, sequenceid=170, compaction requested=false 2023-06-02 15:01:17,888 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:17,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:17,888 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-02 15:01:18,304 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=193 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/9c8a7e6e79424d139a568f895dbe772f 2023-06-02 15:01:18,310 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/9c8a7e6e79424d139a568f895dbe772f as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f 2023-06-02 15:01:18,315 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f, entries=20, sequenceid=193, filesize=25.8 K 2023-06-02 15:01:18,316 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=8.41 KB/8608 for 871c9ceafcb24ea9b4813ed477334690 in 428ms, sequenceid=193, compaction requested=true 2023-06-02 15:01:18,317 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:18,317 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:18,317 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:01:18,318 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 113224 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:01:18,318 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:01:18,318 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:01:18,318 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/19715251c1b34ca8a8220355a2add41f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=110.6 K 2023-06-02 15:01:18,319 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 19715251c1b34ca8a8220355a2add41f, keycount=64, bloomtype=ROW, size=72.6 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1685718031553 2023-06-02 15:01:18,319 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting d515697452c0411a8634861e3e4d84bb, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=170, earliestPutTs=1685718075857 2023-06-02 15:01:18,319 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 9c8a7e6e79424d139a568f895dbe772f, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1685718077866 2023-06-02 15:01:18,330 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#43 average throughput is 93.38 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:18,344 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/0240844bf7aa41189e915089be19a565 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/0240844bf7aa41189e915089be19a565 2023-06-02 15:01:18,349 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into 0240844bf7aa41189e915089be19a565(size=101.2 K), total size for store is 101.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:01:18,350 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:18,350 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718078317; duration=0sec 2023-06-02 15:01:18,350 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:19,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:19,898 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-02 15:01:19,909 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=206 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/ab8a35cdf802412090679c18584b370c 2023-06-02 15:01:19,916 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/ab8a35cdf802412090679c18584b370c as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c 2023-06-02 15:01:19,922 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c, entries=9, sequenceid=206, filesize=14.2 K 2023-06-02 15:01:19,923 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=19.96 KB/20444 for 871c9ceafcb24ea9b4813ed477334690 in 24ms, sequenceid=206, compaction requested=false 2023-06-02 15:01:19,923 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:19,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:19,923 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-02 15:01:19,931 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=229 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/714e2133d4e848e0a432c0b262967821 2023-06-02 15:01:19,933 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-02 15:01:19,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48144 deadline: 1685718089933, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307
2023-06-02 15:01:19,936 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/714e2133d4e848e0a432c0b262967821 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821
2023-06-02 15:01:19,940 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821, entries=20, sequenceid=229, filesize=25.8 K
2023-06-02 15:01:19,941 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 871c9ceafcb24ea9b4813ed477334690 in 18ms, sequenceid=229, compaction requested=true
2023-06-02 15:01:19,941 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690:
2023-06-02 15:01:19,941 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-02 15:01:19,941 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-02 15:01:19,942 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 144592 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-02 15:01:19,942 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files)
2023-06-02 15:01:19,942 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of
871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:19,942 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/0240844bf7aa41189e915089be19a565, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=141.2 K 2023-06-02 15:01:19,942 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 0240844bf7aa41189e915089be19a565, keycount=91, bloomtype=ROW, size=101.2 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1685718031553 2023-06-02 15:01:19,943 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting ab8a35cdf802412090679c18584b370c, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=206, earliestPutTs=1685718077889 2023-06-02 15:01:19,943 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 714e2133d4e848e0a432c0b262967821, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1685718079899 2023-06-02 15:01:19,952 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#46 average throughput is 123.14 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:19,964 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/72aef597a48c49aeb84dc7f5224576f3 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/72aef597a48c49aeb84dc7f5224576f3 2023-06-02 15:01:19,969 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into 72aef597a48c49aeb84dc7f5224576f3(size=131.9 K), total size for store is 131.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:01:19,969 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:19,969 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718079941; duration=0sec 2023-06-02 15:01:19,969 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:30,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:30,014 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-02 15:01:30,025 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=243 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/a5fa117419c04e18854a32b8c19301d5 2023-06-02 15:01:30,030 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/a5fa117419c04e18854a32b8c19301d5 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5 2023-06-02 15:01:30,035 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5, entries=10, sequenceid=243, filesize=15.3 K 2023-06-02 15:01:30,036 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for 871c9ceafcb24ea9b4813ed477334690 in 22ms, sequenceid=243, compaction requested=false 2023-06-02 15:01:30,036 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:32,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:32,023 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:01:32,031 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=253 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/c9d7f8d7476b46fea6aa3330f86dcc64 2023-06-02 15:01:32,037 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/c9d7f8d7476b46fea6aa3330f86dcc64 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64 2023-06-02 15:01:32,042 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64, entries=7, sequenceid=253, filesize=12.1 K 2023-06-02 15:01:32,043 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 871c9ceafcb24ea9b4813ed477334690 in 20ms, sequenceid=253, compaction requested=true 2023-06-02 15:01:32,043 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:32,043 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:32,043 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:01:32,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:32,044 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-06-02 15:01:32,044 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 163136 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:01:32,045 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:01:32,045 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:01:32,045 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/72aef597a48c49aeb84dc7f5224576f3, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=159.3 K 2023-06-02 15:01:32,045 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 72aef597a48c49aeb84dc7f5224576f3, keycount=120, bloomtype=ROW, size=131.9 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1685718031553 2023-06-02 15:01:32,046 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting a5fa117419c04e18854a32b8c19301d5, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=243, earliestPutTs=1685718079923 2023-06-02 15:01:32,046 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting c9d7f8d7476b46fea6aa3330f86dcc64, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1685718090015 2023-06-02 15:01:32,052 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=277 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/2e5c697c681a45748b167bc404c363cf 2023-06-02 15:01:32,058 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/2e5c697c681a45748b167bc404c363cf as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf 2023-06-02 15:01:32,059 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#50 average throughput is 70.29 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:32,066 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf, entries=21, sequenceid=277, filesize=26.9 K 2023-06-02 15:01:32,067 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=5.25 KB/5380 for 871c9ceafcb24ea9b4813ed477334690 in 23ms, sequenceid=277, compaction requested=false 2023-06-02 15:01:32,067 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:32,071 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/a3c4a0e6c0a4424c9004800790975b3a as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a3c4a0e6c0a4424c9004800790975b3a 2023-06-02 15:01:32,075 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into a3c4a0e6c0a4424c9004800790975b3a(size=150.0 K), total size for store is 176.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-02 15:01:32,075 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:32,075 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718092043; duration=0sec 2023-06-02 15:01:32,076 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:34,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:34,053 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-02 15:01:34,063 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=288 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/fa5f8abe83fb46e892691d09022cc830 2023-06-02 15:01:34,069 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/fa5f8abe83fb46e892691d09022cc830 as 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830 2023-06-02 15:01:34,075 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830, entries=7, sequenceid=288, filesize=12.1 K 2023-06-02 15:01:34,076 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 871c9ceafcb24ea9b4813ed477334690 in 23ms, sequenceid=288, compaction requested=true 2023-06-02 15:01:34,076 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:34,076 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:34,076 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:01:34,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:34,077 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-02 15:01:34,078 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 193542 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:01:34,078 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:01:34,078 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:01:34,078 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a3c4a0e6c0a4424c9004800790975b3a, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=189.0 K 2023-06-02 15:01:34,078 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting a3c4a0e6c0a4424c9004800790975b3a, keycount=137, bloomtype=ROW, size=150.0 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1685718031553 2023-06-02 15:01:34,079 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 2e5c697c681a45748b167bc404c363cf, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=277, earliestPutTs=1685718092023 2023-06-02 15:01:34,079 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting fa5f8abe83fb46e892691d09022cc830, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1685718092045 2023-06-02 15:01:34,089 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-02 15:01:34,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] ipc.CallRunner(144): callId: 274 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:48144 deadline: 1685718104088, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=871c9ceafcb24ea9b4813ed477334690, server=jenkins-hbase4.apache.org,33031,1685718018307
2023-06-02 15:01:34,097 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=311 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/4e5f9a39ebfa4ce9a499a1798f6d9964
2023-06-02 15:01:34,099 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#53 average throughput is 84.66 MB/second, slept 0 time(s) and total slept time is 0 ms.
0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:34,103 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/4e5f9a39ebfa4ce9a499a1798f6d9964 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964 2023-06-02 15:01:34,108 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964, entries=20, sequenceid=311, filesize=25.8 K 2023-06-02 15:01:34,108 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/c014f4d78065452b8f87bfd749de93e6 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c014f4d78065452b8f87bfd749de93e6 2023-06-02 15:01:34,109 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 871c9ceafcb24ea9b4813ed477334690 in 32ms, sequenceid=311, compaction requested=false 2023-06-02 15:01:34,109 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:34,114 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into c014f4d78065452b8f87bfd749de93e6(size=179.6 K), total size for store is 205.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:01:34,114 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:34,115 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718094076; duration=0sec 2023-06-02 15:01:34,115 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:44,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33031] regionserver.HRegion(9158): Flush requested on 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:44,135 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 871c9ceafcb24ea9b4813ed477334690 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-02 15:01:44,145 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=325 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/4df5768ed372450db605616371b140a6 2023-06-02 15:01:44,151 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/4df5768ed372450db605616371b140a6 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6 2023-06-02 15:01:44,157 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6, entries=10, sequenceid=325, filesize=15.3 K 2023-06-02 15:01:44,158 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 871c9ceafcb24ea9b4813ed477334690 in 23ms, sequenceid=325, compaction requested=true 2023-06-02 15:01:44,158 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:44,158 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:44,158 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-02 15:01:44,159 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 226022 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-02 15:01:44,159 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1912): 871c9ceafcb24ea9b4813ed477334690/info is initiating minor compaction (all files) 2023-06-02 15:01:44,159 INFO 
[RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 871c9ceafcb24ea9b4813ed477334690/info in TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:44,159 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c014f4d78065452b8f87bfd749de93e6, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6] into tmpdir=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp, totalSize=220.7 K 2023-06-02 15:01:44,160 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting c014f4d78065452b8f87bfd749de93e6, keycount=165, bloomtype=ROW, size=179.6 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1685718031553 2023-06-02 15:01:44,160 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 4e5f9a39ebfa4ce9a499a1798f6d9964, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=311, earliestPutTs=1685718094053 2023-06-02 15:01:44,160 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] compactions.Compactor(207): Compacting 4df5768ed372450db605616371b140a6, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=325, earliestPutTs=1685718094078 2023-06-02 15:01:44,171 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] throttle.PressureAwareThroughputController(145): 871c9ceafcb24ea9b4813ed477334690#info#compaction#55 average throughput is 100.05 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-02 15:01:44,182 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/.tmp/info/886a5f771776468f9e9fd248a63aa1a5 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/886a5f771776468f9e9fd248a63aa1a5 2023-06-02 15:01:44,187 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 871c9ceafcb24ea9b4813ed477334690/info of 871c9ceafcb24ea9b4813ed477334690 into 886a5f771776468f9e9fd248a63aa1a5(size=211.4 K), total size for store is 211.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-02 15:01:44,187 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:44,187 INFO [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690., storeName=871c9ceafcb24ea9b4813ed477334690/info, priority=13, startTime=1685718104158; duration=0sec 2023-06-02 15:01:44,187 DEBUG [RS:0;jenkins-hbase4:33031-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-02 15:01:46,135 INFO [Listener at localhost/36281] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-02 15:01:46,151 INFO [Listener at localhost/36281] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718018698 with entries=311, filesize=307.65 KB; new WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106136 2023-06-02 15:01:46,151 DEBUG [Listener at localhost/36281] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44647,DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf,DISK], DatanodeInfoWithStorage[127.0.0.1:38357,DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf,DISK]] 2023-06-02 15:01:46,151 DEBUG [Listener at localhost/36281] wal.AbstractFSWAL(716): hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718018698 is not closed yet, will try archiving it next time 2023-06-02 15:01:46,156 DEBUG [Listener at localhost/36281] regionserver.HRegion(2446): Flush status journal for c5ae726efbadfef5d5da8ff18a640d59: 2023-06-02 15:01:46,156 INFO [Listener at localhost/36281] regionserver.HRegion(2745): Flushing e02dc5cf8c3d540f6f90d3da5ac78586 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 15:01:46,165 INFO [Listener at localhost/36281] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/.tmp/info/0b1e272247544f7d9712b29af95e5f84 2023-06-02 15:01:46,170 DEBUG [Listener at localhost/36281] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/.tmp/info/0b1e272247544f7d9712b29af95e5f84 as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/info/0b1e272247544f7d9712b29af95e5f84 2023-06-02 15:01:46,174 INFO [Listener at localhost/36281] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/info/0b1e272247544f7d9712b29af95e5f84, entries=2, sequenceid=6, filesize=4.8 K 2023-06-02 15:01:46,175 INFO [Listener at localhost/36281] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 
B/472, currentSize=0 B/0 for e02dc5cf8c3d540f6f90d3da5ac78586 in 19ms, sequenceid=6, compaction requested=false 2023-06-02 15:01:46,176 DEBUG [Listener at localhost/36281] regionserver.HRegion(2446): Flush status journal for e02dc5cf8c3d540f6f90d3da5ac78586: 2023-06-02 15:01:46,176 INFO [Listener at localhost/36281] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-02 15:01:46,193 INFO [Listener at localhost/36281] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/info/9f2212549dc2406ab479328df512f91b 2023-06-02 15:01:46,198 DEBUG [Listener at localhost/36281] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/.tmp/info/9f2212549dc2406ab479328df512f91b as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info/9f2212549dc2406ab479328df512f91b 2023-06-02 15:01:46,202 INFO [Listener at localhost/36281] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/info/9f2212549dc2406ab479328df512f91b, entries=16, sequenceid=24, filesize=7.0 K 2023-06-02 15:01:46,203 INFO [Listener at localhost/36281] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 27ms, sequenceid=24, compaction requested=false 2023-06-02 15:01:46,203 DEBUG [Listener at localhost/36281] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-02 15:01:46,203 DEBUG [Listener at localhost/36281] regionserver.HRegion(2446): Flush status journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:46,209 INFO [Listener at localhost/36281] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106136 with entries=2, filesize=607 B; new WAL /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106203 2023-06-02 15:01:46,210 DEBUG [Listener at localhost/36281] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44647,DS-dbb74b93-f08d-47ea-8807-48b4feda1ebf,DISK], DatanodeInfoWithStorage[127.0.0.1:38357,DS-3771fdbc-37b4-4ce1-b4f7-fc631e12f1bf,DISK]] 2023-06-02 15:01:46,210 DEBUG [Listener at localhost/36281] wal.AbstractFSWAL(716): hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106136 is not closed yet, will try archiving it next time 2023-06-02 15:01:46,210 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718018698 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718018698 2023-06-02 15:01:46,211 INFO [Listener at 
localhost/36281] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-06-02 15:01:46,213 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106136 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs/jenkins-hbase4.apache.org%2C33031%2C1685718018307.1685718106136 2023-06-02 15:01:46,311 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 15:01:46,311 INFO [Listener at localhost/36281] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-02 15:01:46,311 DEBUG [Listener at localhost/36281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29e2f867 to 127.0.0.1:58021 2023-06-02 15:01:46,311 DEBUG [Listener at localhost/36281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:46,312 DEBUG [Listener at localhost/36281] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 15:01:46,312 DEBUG [Listener at localhost/36281] util.JVMClusterUtil(257): Found active master hash=724177974, stopped=false 2023-06-02 15:01:46,312 INFO [Listener at localhost/36281] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:01:46,314 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:46,314 INFO [Listener at localhost/36281] procedure2.ProcedureExecutor(629): Stopping 2023-06-02 15:01:46,314 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:46,314 DEBUG [Listener at localhost/36281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47f9c329 to 127.0.0.1:58021 2023-06-02 15:01:46,314 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:46,314 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:46,314 DEBUG [Listener at localhost/36281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:46,315 INFO [Listener at localhost/36281] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33031,1685718018307' ***** 2023-06-02 15:01:46,315 INFO [Listener at localhost/36281] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 15:01:46,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 15:01:46,316 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 
2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(3303): Received CLOSE for c5ae726efbadfef5d5da8ff18a640d59 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(3303): Received CLOSE for e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(3303): Received CLOSE for 871c9ceafcb24ea9b4813ed477334690 2023-06-02 15:01:46,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5ae726efbadfef5d5da8ff18a640d59, disabling compactions & flushes 2023-06-02 15:01:46,316 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:46,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:01:46,316 DEBUG [RS:0;jenkins-hbase4:33031] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x10815870 to 127.0.0.1:58021 2023-06-02 15:01:46,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:01:46,317 DEBUG [RS:0;jenkins-hbase4:33031] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:46,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. after waiting 0 ms 2023-06-02 15:01:46,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:01:46,317 INFO [RS:0;jenkins-hbase4:33031] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 15:01:46,317 INFO [RS:0;jenkins-hbase4:33031] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 15:01:46,317 INFO [RS:0;jenkins-hbase4:33031] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-02 15:01:46,317 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 15:01:46,317 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-06-02 15:01:46,318 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1478): Online Regions={c5ae726efbadfef5d5da8ff18a640d59=TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59., e02dc5cf8c3d540f6f90d3da5ac78586=hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586., 1588230740=hbase:meta,,1.1588230740, 871c9ceafcb24ea9b4813ed477334690=TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.} 2023-06-02 15:01:46,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 15:01:46,318 DEBUG [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1504): Waiting on 1588230740, 871c9ceafcb24ea9b4813ed477334690, c5ae726efbadfef5d5da8ff18a640d59, e02dc5cf8c3d540f6f90d3da5ac78586 2023-06-02 15:01:46,318 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 15:01:46,318 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-bottom] to archive 2023-06-02 15:01:46,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 15:01:46,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 15:01:46,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 15:01:46,321 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-02 15:01:46,324 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:01:46,327 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-06-02 15:01:46,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 15:01:46,328 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 15:01:46,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 15:01:46,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-02 15:01:46,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/c5ae726efbadfef5d5da8ff18a640d59/recovered.edits/127.seqid, newMaxSeqId=127, maxSeqId=122 2023-06-02 15:01:46,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5ae726efbadfef5d5da8ff18a640d59: 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685718033633.c5ae726efbadfef5d5da8ff18a640d59. 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e02dc5cf8c3d540f6f90d3da5ac78586, disabling compactions & flushes 2023-06-02 15:01:46,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. after waiting 0 ms 2023-06-02 15:01:46,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 
2023-06-02 15:01:46,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/hbase/namespace/e02dc5cf8c3d540f6f90d3da5ac78586/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-02 15:01:46,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e02dc5cf8c3d540f6f90d3da5ac78586: 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685718018950.e02dc5cf8c3d540f6f90d3da5ac78586. 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 871c9ceafcb24ea9b4813ed477334690, disabling compactions & flushes 2023-06-02 15:01:46,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. after waiting 0 ms 2023-06-02 15:01:46,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 
2023-06-02 15:01:46,350 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6->hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/cc944a5f55c4a238ecd076504cad31b6/info/14af675034614ce1802aa5032830b247-top, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/b37cd489845b4d778cccfdbd2798955f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/19715251c1b34ca8a8220355a2add41f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/0240844bf7aa41189e915089be19a565, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/72aef597a48c49aeb84dc7f5224576f3, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a3c4a0e6c0a4424c9004800790975b3a, 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c014f4d78065452b8f87bfd749de93e6, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6] to archive 2023-06-02 15:01:46,350 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-02 15:01:46,352 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/14af675034614ce1802aa5032830b247.cc944a5f55c4a238ecd076504cad31b6 2023-06-02 15:01:46,353 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-2e006582159c4536b876597851bb23de 2023-06-02 15:01:46,355 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/b37cd489845b4d778cccfdbd2798955f to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/b37cd489845b4d778cccfdbd2798955f 2023-06-02 15:01:46,356 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from 
FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/TestLogRolling-testLogRolling=cc944a5f55c4a238ecd076504cad31b6-f618aeb30b85463fac8535849397a59f 2023-06-02 15:01:46,357 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/045a7ebeb2c84fca98d17e24cf109f16 2023-06-02 15:01:46,358 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/19715251c1b34ca8a8220355a2add41f to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/19715251c1b34ca8a8220355a2add41f 2023-06-02 15:01:46,359 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/3df8e2954f204e4992c927e7ed730b18 2023-06-02 15:01:46,360 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/d515697452c0411a8634861e3e4d84bb 2023-06-02 15:01:46,361 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/0240844bf7aa41189e915089be19a565 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/0240844bf7aa41189e915089be19a565 2023-06-02 
15:01:46,363 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/9c8a7e6e79424d139a568f895dbe772f 2023-06-02 15:01:46,363 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/ab8a35cdf802412090679c18584b370c 2023-06-02 15:01:46,364 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/72aef597a48c49aeb84dc7f5224576f3 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/72aef597a48c49aeb84dc7f5224576f3 2023-06-02 15:01:46,365 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/714e2133d4e848e0a432c0b262967821 2023-06-02 15:01:46,366 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a5fa117419c04e18854a32b8c19301d5 2023-06-02 15:01:46,367 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a3c4a0e6c0a4424c9004800790975b3a to 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/a3c4a0e6c0a4424c9004800790975b3a 2023-06-02 15:01:46,369 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c9d7f8d7476b46fea6aa3330f86dcc64 2023-06-02 15:01:46,370 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/2e5c697c681a45748b167bc404c363cf 2023-06-02 15:01:46,371 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c014f4d78065452b8f87bfd749de93e6 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/c014f4d78065452b8f87bfd749de93e6 2023-06-02 15:01:46,372 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/fa5f8abe83fb46e892691d09022cc830 2023-06-02 15:01:46,373 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4e5f9a39ebfa4ce9a499a1798f6d9964 2023-06-02 15:01:46,374 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6 to hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/archive/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/info/4df5768ed372450db605616371b140a6 2023-06-02 15:01:46,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/data/default/TestLogRolling-testLogRolling/871c9ceafcb24ea9b4813ed477334690/recovered.edits/330.seqid, newMaxSeqId=330, maxSeqId=122 2023-06-02 15:01:46,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:46,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 871c9ceafcb24ea9b4813ed477334690: 2023-06-02 15:01:46,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685718033633.871c9ceafcb24ea9b4813ed477334690. 2023-06-02 15:01:46,518 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33031,1685718018307; all regions closed. 2023-06-02 15:01:46,519 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:46,524 DEBUG [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs 2023-06-02 15:01:46,524 INFO [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33031%2C1685718018307.meta:.meta(num 1685718018859) 2023-06-02 15:01:46,524 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/WALs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:46,530 DEBUG [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/oldWALs 2023-06-02 15:01:46,530 INFO [RS:0;jenkins-hbase4:33031] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33031%2C1685718018307:(num 1685718106203) 2023-06-02 15:01:46,530 DEBUG [RS:0;jenkins-hbase4:33031] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:46,530 INFO [RS:0;jenkins-hbase4:33031] regionserver.LeaseManager(133): Closed leases 2023-06-02 15:01:46,530 INFO [RS:0;jenkins-hbase4:33031] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-02 15:01:46,530 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-02 15:01:46,531 INFO [RS:0;jenkins-hbase4:33031] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33031 2023-06-02 15:01:46,533 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:01:46,533 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33031,1685718018307 2023-06-02 15:01:46,533 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:01:46,533 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33031,1685718018307] 2023-06-02 15:01:46,533 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33031,1685718018307; numProcessing=1 2023-06-02 15:01:46,536 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33031,1685718018307 already deleted, retry=false 2023-06-02 15:01:46,536 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33031,1685718018307 expired; onlineServers=0 2023-06-02 15:01:46,536 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44995,1685718018268' ***** 2023-06-02 15:01:46,536 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-02 15:01:46,536 DEBUG [M:0;jenkins-hbase4:44995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f15858b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 15:01:46,537 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:01:46,537 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44995,1685718018268; all regions closed. 2023-06-02 15:01:46,537 DEBUG [M:0;jenkins-hbase4:44995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:46,537 DEBUG [M:0;jenkins-hbase4:44995] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 15:01:46,537 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-02 15:01:46,537 DEBUG [M:0;jenkins-hbase4:44995] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-02 15:01:46,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718018502] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718018502,5,FailOnTimeoutGroup] 2023-06-02 15:01:46,537 INFO [M:0;jenkins-hbase4:44995] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-06-02 15:01:46,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718018501] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718018501,5,FailOnTimeoutGroup] 2023-06-02 15:01:46,538 INFO [M:0;jenkins-hbase4:44995] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-02 15:01:46,538 INFO [M:0;jenkins-hbase4:44995] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-06-02 15:01:46,538 DEBUG [M:0;jenkins-hbase4:44995] master.HMaster(1512): Stopping service threads 2023-06-02 15:01:46,538 INFO [M:0;jenkins-hbase4:44995] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-02 15:01:46,539 ERROR [M:0;jenkins-hbase4:44995] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-02 15:01:46,539 INFO [M:0;jenkins-hbase4:44995] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-02 15:01:46,539 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-02 15:01:46,539 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-02 15:01:46,539 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:46,539 DEBUG [M:0;jenkins-hbase4:44995] zookeeper.ZKUtil(398): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-02 15:01:46,539 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:01:46,539 WARN [M:0;jenkins-hbase4:44995] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-02 15:01:46,539 INFO [M:0;jenkins-hbase4:44995] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-02 15:01:46,540 INFO [M:0;jenkins-hbase4:44995] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-02 15:01:46,540 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 15:01:46,540 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:46,540 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:46,540 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
after waiting 0 ms 2023-06-02 15:01:46,540 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:46,540 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.70 KB heapSize=78.42 KB 2023-06-02 15:01:46,549 INFO [M:0;jenkins-hbase4:44995] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.70 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b28273af585d45c6852df6d78f9b367f 2023-06-02 15:01:46,554 INFO [M:0;jenkins-hbase4:44995] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b28273af585d45c6852df6d78f9b367f 2023-06-02 15:01:46,556 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b28273af585d45c6852df6d78f9b367f as hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b28273af585d45c6852df6d78f9b367f 2023-06-02 15:01:46,560 INFO [M:0;jenkins-hbase4:44995] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b28273af585d45c6852df6d78f9b367f 2023-06-02 15:01:46,560 INFO [M:0;jenkins-hbase4:44995] regionserver.HStore(1080): Added hdfs://localhost:45467/user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b28273af585d45c6852df6d78f9b367f, entries=18, sequenceid=160, filesize=6.9 K 2023-06-02 15:01:46,561 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegion(2948): Finished flush of dataSize ~64.70 KB/66256, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=160, compaction requested=false 2023-06-02 15:01:46,562 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:46,562 DEBUG [M:0;jenkins-hbase4:44995] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:01:46,562 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/94db1a76-5363-856f-6a24-9620479b9820/MasterData/WALs/jenkins-hbase4.apache.org,44995,1685718018268 2023-06-02 15:01:46,565 INFO [M:0;jenkins-hbase4:44995] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-02 15:01:46,565 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-02 15:01:46,566 INFO [M:0;jenkins-hbase4:44995] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44995 2023-06-02 15:01:46,568 DEBUG [M:0;jenkins-hbase4:44995] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44995,1685718018268 already deleted, retry=false 2023-06-02 15:01:46,576 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-02 15:01:46,634 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:01:46,634 INFO [RS:0;jenkins-hbase4:33031] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33031,1685718018307; zookeeper connection closed. 2023-06-02 15:01:46,634 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): regionserver:33031-0x1008c0e2bd00001, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:01:46,635 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64d27525] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64d27525 2023-06-02 15:01:46,635 INFO [Listener at localhost/36281] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-02 15:01:46,734 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:01:46,734 INFO [M:0;jenkins-hbase4:44995] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44995,1685718018268; zookeeper connection closed. 
2023-06-02 15:01:46,734 DEBUG [Listener at localhost/36281-EventThread] zookeeper.ZKWatcher(600): master:44995-0x1008c0e2bd00000, quorum=127.0.0.1:58021, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-02 15:01:46,735 WARN [Listener at localhost/36281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 15:01:46,739 INFO [Listener at localhost/36281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:01:46,844 WARN [BP-715505891-172.31.14.131-1685718017731 heartbeating to localhost/127.0.0.1:45467] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 15:01:46,844 WARN [BP-715505891-172.31.14.131-1685718017731 heartbeating to localhost/127.0.0.1:45467] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-715505891-172.31.14.131-1685718017731 (Datanode Uuid 7be8d142-059f-4428-8946-e727ab229074) service to localhost/127.0.0.1:45467 2023-06-02 15:01:46,845 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/dfs/data/data3/current/BP-715505891-172.31.14.131-1685718017731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:01:46,845 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/dfs/data/data4/current/BP-715505891-172.31.14.131-1685718017731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:01:46,846 WARN [Listener at localhost/36281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-02 15:01:46,853 INFO [Listener at localhost/36281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:01:46,957 WARN [BP-715505891-172.31.14.131-1685718017731 heartbeating to localhost/127.0.0.1:45467] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-02 15:01:46,957 WARN [BP-715505891-172.31.14.131-1685718017731 heartbeating to localhost/127.0.0.1:45467] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-715505891-172.31.14.131-1685718017731 (Datanode Uuid 59522bee-33a2-4a4f-bda7-39e4c321eb00) service to localhost/127.0.0.1:45467 2023-06-02 15:01:46,957 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/dfs/data/data1/current/BP-715505891-172.31.14.131-1685718017731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:01:46,958 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/cluster_1f14afc9-391e-9c70-7f14-bad7c89105c7/dfs/data/data2/current/BP-715505891-172.31.14.131-1685718017731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-02 15:01:46,970 INFO [Listener at localhost/36281] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-02 15:01:47,089 INFO [Listener at localhost/36281] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-02 15:01:47,146 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-02 15:01:47,156 INFO [Listener at localhost/36281] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=106 (was 93) - Thread LEAK? -, OpenFileDescriptor=533 (was 503) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=27 (was 51), ProcessCount=170 (was 170), AvailableMemoryMB=304 (was 578) 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=106, OpenFileDescriptor=533, MaxFileDescriptor=60000, SystemLoadAverage=27, ProcessCount=170, AvailableMemoryMB=304 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/hadoop.log.dir so I do NOT create it in target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/6a85b405-18d7-c006-e8ec-58fe63e26c81/hadoop.tmp.dir so I do NOT create it in target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6, deleteOnExit=true 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/test.cache.data in system properties and HBase conf 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/hadoop.tmp.dir in system properties and HBase conf 2023-06-02 15:01:47,165 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/hadoop.log.dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-02 15:01:47,166 DEBUG [Listener at localhost/36281] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 15:01:47,166 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/dfs.journalnode.edits.dir in 
system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/nfs.dump.dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/java.io.tmpdir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-02 15:01:47,167 INFO [Listener at localhost/36281] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-02 15:01:47,169 WARN [Listener at localhost/36281] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-02 15:01:47,172 WARN [Listener at localhost/36281] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 15:01:47,172 WARN [Listener at localhost/36281] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 15:01:47,215 WARN [Listener at localhost/36281] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:01:47,216 INFO [Listener at localhost/36281] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:01:47,220 INFO [Listener at localhost/36281] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/java.io.tmpdir/Jetty_localhost_38281_hdfs____m6r3rk/webapp 2023-06-02 15:01:47,310 INFO [Listener at localhost/36281] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38281 2023-06-02 15:01:47,312 WARN [Listener at localhost/36281] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-02 15:01:47,315 WARN [Listener at localhost/36281] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-02 15:01:47,315 WARN [Listener at localhost/36281] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-02 15:01:47,356 WARN [Listener at localhost/46049] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:01:47,369 WARN [Listener at localhost/46049] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 15:01:47,371 WARN [Listener at localhost/46049] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:01:47,372 INFO [Listener at localhost/46049] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:01:47,377 INFO [Listener at localhost/46049] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/java.io.tmpdir/Jetty_localhost_44041_datanode____c3ms9g/webapp 2023-06-02 15:01:47,467 INFO [Listener at localhost/46049] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44041 2023-06-02 15:01:47,475 WARN [Listener at localhost/44859] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:01:47,488 WARN [Listener at localhost/44859] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-02 15:01:47,491 WARN [Listener at localhost/44859] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-02 15:01:47,492 INFO [Listener at localhost/44859] log.Slf4jLog(67): jetty-6.1.26 2023-06-02 15:01:47,496 INFO [Listener at localhost/44859] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/java.io.tmpdir/Jetty_localhost_39643_datanode____gcz4hi/webapp 2023-06-02 15:01:47,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88e1de697889d280: Processing first storage report for DS-f057a452-7c22-414d-ae08-d353784271ca from datanode a7dfcdc7-7a8d-4c0f-81ac-ab2bfb95976e 2023-06-02 15:01:47,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88e1de697889d280: from storage DS-f057a452-7c22-414d-ae08-d353784271ca node DatanodeRegistration(127.0.0.1:44331, datanodeUuid=a7dfcdc7-7a8d-4c0f-81ac-ab2bfb95976e, infoPort=45941, infoSecurePort=0, ipcPort=44859, storageInfo=lv=-57;cid=testClusterID;nsid=693474800;c=1685718107174), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:01:47,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88e1de697889d280: Processing first storage report for DS-c9663027-d9ce-4f89-8299-e62c44d25618 from datanode a7dfcdc7-7a8d-4c0f-81ac-ab2bfb95976e 2023-06-02 15:01:47,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88e1de697889d280: from storage DS-c9663027-d9ce-4f89-8299-e62c44d25618 node DatanodeRegistration(127.0.0.1:44331, datanodeUuid=a7dfcdc7-7a8d-4c0f-81ac-ab2bfb95976e, infoPort=45941, infoSecurePort=0, ipcPort=44859, storageInfo=lv=-57;cid=testClusterID;nsid=693474800;c=1685718107174), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:01:47,610 INFO [Listener at localhost/44859] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39643 2023-06-02 15:01:47,616 WARN [Listener at localhost/35309] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-02 15:01:47,703 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaeed595dac3ec6b: Processing first storage report for DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8 from datanode d059d7e9-85b5-4aac-9f64-5e6e96861706 2023-06-02 15:01:47,703 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaeed595dac3ec6b: from storage DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8 node DatanodeRegistration(127.0.0.1:42985, datanodeUuid=d059d7e9-85b5-4aac-9f64-5e6e96861706, infoPort=36801, infoSecurePort=0, ipcPort=35309, storageInfo=lv=-57;cid=testClusterID;nsid=693474800;c=1685718107174), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:01:47,703 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaeed595dac3ec6b: Processing first storage report for DS-e1e0e8df-d8da-4ef9-83c3-bfe22ca8d046 from datanode d059d7e9-85b5-4aac-9f64-5e6e96861706 2023-06-02 15:01:47,703 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaeed595dac3ec6b: from storage DS-e1e0e8df-d8da-4ef9-83c3-bfe22ca8d046 node DatanodeRegistration(127.0.0.1:42985, datanodeUuid=d059d7e9-85b5-4aac-9f64-5e6e96861706, infoPort=36801, infoSecurePort=0, ipcPort=35309, storageInfo=lv=-57;cid=testClusterID;nsid=693474800;c=1685718107174), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-02 15:01:47,722 DEBUG [Listener at localhost/35309] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863 2023-06-02 15:01:47,724 INFO [Listener at localhost/35309] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/zookeeper_0, clientPort=64448, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-02 15:01:47,725 INFO [Listener at localhost/35309] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64448 2023-06-02 15:01:47,725 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,726 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,738 INFO [Listener at localhost/35309] util.FSUtils(471): Created version file at hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349 with version=8 2023-06-02 15:01:47,738 INFO [Listener at localhost/35309] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/cadeccee-14a6-1539-fa31-a645e694a4fd/hbase-staging 2023-06-02 15:01:47,739 INFO [Listener at localhost/35309] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 15:01:47,740 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,740 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,740 INFO [Listener at localhost/35309] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 15:01:47,740 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,740 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 15:01:47,740 INFO 
[Listener at localhost/35309] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 15:01:47,741 INFO [Listener at localhost/35309] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38701 2023-06-02 15:01:47,741 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,742 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,743 INFO [Listener at localhost/35309] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38701 connecting to ZooKeeper ensemble=127.0.0.1:64448 2023-06-02 15:01:47,750 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:387010x0, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 15:01:47,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38701-0x1008c0f894e0000 connected 2023-06-02 15:01:47,765 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:01:47,766 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:47,766 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 15:01:47,766 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38701 2023-06-02 15:01:47,767 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38701 2023-06-02 15:01:47,767 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38701 2023-06-02 15:01:47,767 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38701 2023-06-02 15:01:47,767 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38701 2023-06-02 15:01:47,767 INFO [Listener at localhost/35309] master.HMaster(444): hbase.rootdir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349, hbase.cluster.distributed=false 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] 
ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-02 15:01:47,780 INFO [Listener at localhost/35309] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-02 15:01:47,782 INFO [Listener at localhost/35309] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33137 2023-06-02 15:01:47,782 INFO [Listener at localhost/35309] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-02 15:01:47,783 DEBUG [Listener at localhost/35309] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-02 15:01:47,783 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,784 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,785 INFO [Listener at localhost/35309] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33137 connecting to ZooKeeper ensemble=127.0.0.1:64448 2023-06-02 15:01:47,788 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:331370x0, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-02 15:01:47,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33137-0x1008c0f894e0001 connected 2023-06-02 15:01:47,789 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-02 15:01:47,790 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:47,790 DEBUG [Listener at localhost/35309] zookeeper.ZKUtil(164): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-02 15:01:47,793 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33137 2023-06-02 15:01:47,794 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33137 2023-06-02 15:01:47,795 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33137 2023-06-02 15:01:47,795 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33137 2023-06-02 15:01:47,795 DEBUG [Listener at localhost/35309] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33137 2023-06-02 15:01:47,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,801 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 15:01:47,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,803 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 15:01:47,803 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-02 15:01:47,803 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:47,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 15:01:47,804 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-02 15:01:47,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38701,1685718107739 from backup master directory 2023-06-02 15:01:47,806 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,806 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-02 15:01:47,807 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 15:01:47,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/hbase.id with ID: c1002d73-02a4-427c-ab05-8c31aa44b82f 2023-06-02 15:01:47,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:47,828 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:47,836 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x030103b9 to 127.0.0.1:64448 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:01:47,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38376960, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:01:47,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-02 15:01:47,840 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-02 15:01:47,840 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:01:47,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store-tmp 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-02 15:01:47,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:47,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-02 15:01:47,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:01:47,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/WALs/jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38701%2C1685718107739, suffix=, logDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/WALs/jenkins-hbase4.apache.org,38701,1685718107739, archiveDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/oldWALs, maxLogs=10 2023-06-02 15:01:47,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/WALs/jenkins-hbase4.apache.org,38701,1685718107739/jenkins-hbase4.apache.org%2C38701%2C1685718107739.1685718107850 2023-06-02 15:01:47,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44331,DS-f057a452-7c22-414d-ae08-d353784271ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8,DISK]] 2023-06-02 15:01:47,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:01:47,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:47,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,856 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,857 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-02 15:01:47,857 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-02 15:01:47,858 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:47,858 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-02 15:01:47,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:01:47,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=849171, jitterRate=0.07977734506130219}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:01:47,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-02 15:01:47,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-02 15:01:47,863 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-02 15:01:47,863 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-06-02 15:01:47,863 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-02 15:01:47,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-02 15:01:47,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-02 15:01:47,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-02 15:01:47,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-02 15:01:47,865 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-02 15:01:47,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-02 15:01:47,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-02 15:01:47,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-02 15:01:47,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-02 15:01:47,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-02 15:01:47,879 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:47,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-02 15:01:47,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-02 15:01:47,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-02 15:01:47,882 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:47,882 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:47,882 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:47,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38701,1685718107739, sessionid=0x1008c0f894e0000, setting cluster-up flag (Was=false) 2023-06-02 15:01:47,886 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:47,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-02 15:01:47,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,895 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 
15:01:47,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-02 15:01:47,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:47,901 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.hbase-snapshot/.tmp 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 15:01:47,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:47,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685718137907 2023-06-02 15:01:47,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-02 15:01:47,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-02 15:01:47,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-02 15:01:47,908 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:47,908 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 15:01:47,908 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-02 15:01:47,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-02 15:01:47,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718107909,5,FailOnTimeoutGroup] 2023-06-02 15:01:47,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718107909,5,FailOnTimeoutGroup] 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:47,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-02 15:01:47,909 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 15:01:47,916 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 15:01:47,917 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-02 15:01:47,917 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349 2023-06-02 15:01:47,922 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:47,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 15:01:47,924 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/info 2023-06-02 15:01:47,925 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 15:01:47,925 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:47,925 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 15:01:47,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:01:47,926 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 15:01:47,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:47,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 15:01:47,928 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/table 2023-06-02 15:01:47,928 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 15:01:47,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:47,929 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740 2023-06-02 15:01:47,929 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740 2023-06-02 15:01:47,931 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 15:01:47,932 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:01:47,934 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=731645, jitterRate=-0.06966547667980194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 15:01:47,934 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 15:01:47,934 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 15:01:47,935 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 15:01:47,935 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 15:01:47,935 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-02 15:01:47,935 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-02 15:01:47,936 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-02 15:01:47,937 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-02 15:01:47,938 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-02 15:01:48,000 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(951): ClusterId : c1002d73-02a4-427c-ab05-8c31aa44b82f 2023-06-02 15:01:48,001 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-02 15:01:48,003 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-02 15:01:48,003 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-02 15:01:48,006 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-02 15:01:48,006 DEBUG [RS:0;jenkins-hbase4:33137] zookeeper.ReadOnlyZKClient(139): Connect 0x653d4177 to 127.0.0.1:64448 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:01:48,010 DEBUG [RS:0;jenkins-hbase4:33137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6bfab3a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:01:48,010 DEBUG [RS:0;jenkins-hbase4:33137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6fc52a07, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 15:01:48,018 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33137 2023-06-02 15:01:48,018 INFO [RS:0;jenkins-hbase4:33137] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-02 15:01:48,018 INFO [RS:0;jenkins-hbase4:33137] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-02 15:01:48,018 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-02 15:01:48,019 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,38701,1685718107739 with isa=jenkins-hbase4.apache.org/172.31.14.131:33137, startcode=1685718107779 2023-06-02 15:01:48,019 DEBUG [RS:0;jenkins-hbase4:33137] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-02 15:01:48,021 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56347, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-02 15:01:48,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,023 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349 2023-06-02 15:01:48,023 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46049 2023-06-02 15:01:48,023 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-02 15:01:48,024 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:01:48,025 DEBUG [RS:0;jenkins-hbase4:33137] zookeeper.ZKUtil(162): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,025 WARN [RS:0;jenkins-hbase4:33137] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-02 15:01:48,025 INFO [RS:0;jenkins-hbase4:33137] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:01:48,025 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1946): logDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,025 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33137,1685718107779] 2023-06-02 15:01:48,033 DEBUG [RS:0;jenkins-hbase4:33137] zookeeper.ZKUtil(162): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,033 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-02 15:01:48,033 INFO [RS:0;jenkins-hbase4:33137] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-02 15:01:48,034 INFO [RS:0;jenkins-hbase4:33137] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-02 15:01:48,035 INFO [RS:0;jenkins-hbase4:33137] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-02 15:01:48,035 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,035 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-02 15:01:48,036 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,036 DEBUG [RS:0;jenkins-hbase4:33137] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-06-02 15:01:48,037 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,037 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,037 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,048 INFO [RS:0;jenkins-hbase4:33137] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-02 15:01:48,048 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33137,1685718107779-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-02 15:01:48,058 INFO [RS:0;jenkins-hbase4:33137] regionserver.Replication(203): jenkins-hbase4.apache.org,33137,1685718107779 started 2023-06-02 15:01:48,058 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33137,1685718107779, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33137, sessionid=0x1008c0f894e0001 2023-06-02 15:01:48,058 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-02 15:01:48,058 DEBUG [RS:0;jenkins-hbase4:33137] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,058 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33137,1685718107779' 2023-06-02 15:01:48,058 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-02 15:01:48,058 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33137,1685718107779' 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-02 15:01:48,059 DEBUG [RS:0;jenkins-hbase4:33137] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-02 15:01:48,059 INFO [RS:0;jenkins-hbase4:33137] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-02 15:01:48,059 INFO [RS:0;jenkins-hbase4:33137] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-02 15:01:48,088 DEBUG [jenkins-hbase4:38701] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-02 15:01:48,089 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33137,1685718107779, state=OPENING 2023-06-02 15:01:48,090 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-02 15:01:48,092 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:48,093 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 15:01:48,093 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33137,1685718107779}] 2023-06-02 15:01:48,161 INFO [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33137%2C1685718107779, suffix=, logDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779, archiveDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs, maxLogs=32 2023-06-02 15:01:48,168 INFO [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779/jenkins-hbase4.apache.org%2C33137%2C1685718107779.1685718108161 2023-06-02 15:01:48,168 DEBUG [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44331,DS-f057a452-7c22-414d-ae08-d353784271ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8,DISK]] 2023-06-02 15:01:48,246 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,247 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-02 15:01:48,249 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40684, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-02 15:01:48,252 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-02 15:01:48,253 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:01:48,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33137%2C1685718107779.meta, suffix=.meta, logDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779, archiveDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs, maxLogs=32 2023-06-02 15:01:48,264 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779/jenkins-hbase4.apache.org%2C33137%2C1685718107779.meta.1685718108255.meta 2023-06-02 15:01:48,264 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42985,DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8,DISK], DatanodeInfoWithStorage[127.0.0.1:44331,DS-f057a452-7c22-414d-ae08-d353784271ca,DISK]] 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-02 15:01:48,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-02 15:01:48,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-02 15:01:48,267 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-02 15:01:48,268 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/info 2023-06-02 15:01:48,268 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/info 2023-06-02 15:01:48,268 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-02 15:01:48,269 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:48,269 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-02 15:01:48,270 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:01:48,270 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/rep_barrier 2023-06-02 15:01:48,270 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-02 15:01:48,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:48,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-02 15:01:48,271 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/table 2023-06-02 15:01:48,271 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/table 2023-06-02 15:01:48,272 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-02 15:01:48,272 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:48,273 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740 2023-06-02 15:01:48,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740 2023-06-02 15:01:48,275 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-02 15:01:48,276 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-02 15:01:48,277 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=786991, jitterRate=7.109493017196655E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-02 15:01:48,277 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-02 15:01:48,279 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685718108246 2023-06-02 15:01:48,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-02 15:01:48,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-02 15:01:48,283 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33137,1685718107779, state=OPEN 2023-06-02 15:01:48,285 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-02 15:01:48,285 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-02 15:01:48,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-02 15:01:48,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33137,1685718107779 in 192 msec 2023-06-02 15:01:48,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-02 15:01:48,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-06-02 15:01:48,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 386 msec 2023-06-02 15:01:48,289 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685718108289, completionTime=-1 2023-06-02 15:01:48,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-02 15:01:48,290 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-02 15:01:48,292 DEBUG [hconnection-0x56813c37-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 15:01:48,294 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40692, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 15:01:48,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-02 15:01:48,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685718168295 2023-06-02 15:01:48,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685718228295 2023-06-02 15:01:48,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38701,1685718107739-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38701,1685718107739-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38701,1685718107739-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38701, period=300000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-02 15:01:48,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
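[editor's note] The WAL configuration (blocksize=256 MB, rollsize=128 MB, maxLogs=32) and the CompactionConfiguration values (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0) logged above come from ordinary HBase configuration. A sketch of setting the corresponding keys, assuming the usual 2.x property names; treat the key names and values as an approximation of this run, not a verified excerpt of its hbase-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalAndCompactionTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // WAL sizing comparable to "blocksize=256 MB, rollsize=128 MB, maxLogs=32" above.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // 256 MB * 0.5 = 128 MB roll size
        conf.setInt("hbase.regionserver.maxlogs", 32);
        // Compaction selection comparable to the CompactionConfiguration lines above.
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);

        long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
                * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
        System.out.println("WAL roll size (bytes): " + rollSize);
    }
}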
2023-06-02 15:01:48,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-02 15:01:48,302 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-02 15:01:48,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-02 15:01:48,304 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-02 15:01:48,305 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-02 15:01:48,306 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.tmp/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,307 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.tmp/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1 empty. 2023-06-02 15:01:48,307 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.tmp/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,307 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-02 15:01:48,316 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-02 15:01:48,317 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 17611bb296cebb14e685d401dffa2ef1, NAME => 'hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/.tmp 2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 17611bb296cebb14e685d401dffa2ef1, disabling compactions & flushes 2023-06-02 15:01:48,324 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 
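[editor's note] The CreateTableProcedure above builds hbase:namespace with the column-family attributes printed in the descriptor (BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192). A client-side sketch of declaring an equivalent descriptor with the 2.x builder API; the table name is a made-up user table, since hbase:namespace itself is created by the master, and a reachable cluster via the default configuration is assumed:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
    public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo_namespace_like"))   // illustrative name
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                        .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                        .setInMemory(true)                  // IN_MEMORY => 'true'
                        .setMaxVersions(10)                 // VERSIONS => '10'
                        .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                        .build())
                .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            admin.createTable(td);
        }
    }
}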
2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. after waiting 0 ms 2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,324 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,324 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 17611bb296cebb14e685d401dffa2ef1: 2023-06-02 15:01:48,326 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-02 15:01:48,327 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718108327"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685718108327"}]},"ts":"1685718108327"} 2023-06-02 15:01:48,329 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-02 15:01:48,330 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-02 15:01:48,330 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718108330"}]},"ts":"1685718108330"} 2023-06-02 15:01:48,331 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-02 15:01:48,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=17611bb296cebb14e685d401dffa2ef1, ASSIGN}] 2023-06-02 15:01:48,344 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=17611bb296cebb14e685d401dffa2ef1, ASSIGN 2023-06-02 15:01:48,344 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=17611bb296cebb14e685d401dffa2ef1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33137,1685718107779; forceNewPlan=false, retain=false 2023-06-02 15:01:48,495 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=17611bb296cebb14e685d401dffa2ef1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,496 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718108495"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685718108495"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685718108495"}]},"ts":"1685718108495"} 2023-06-02 15:01:48,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 17611bb296cebb14e685d401dffa2ef1, server=jenkins-hbase4.apache.org,33137,1685718107779}] 2023-06-02 15:01:48,653 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17611bb296cebb14e685d401dffa2ef1, NAME => 'hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.', STARTKEY => '', ENDKEY => ''} 2023-06-02 15:01:48,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-02 15:01:48,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,654 INFO [StoreOpener-17611bb296cebb14e685d401dffa2ef1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,655 DEBUG [StoreOpener-17611bb296cebb14e685d401dffa2ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/info 2023-06-02 15:01:48,656 DEBUG [StoreOpener-17611bb296cebb14e685d401dffa2ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/info 2023-06-02 15:01:48,656 INFO [StoreOpener-17611bb296cebb14e685d401dffa2ef1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17611bb296cebb14e685d401dffa2ef1 columnFamilyName info 2023-06-02 15:01:48,656 INFO [StoreOpener-17611bb296cebb14e685d401dffa2ef1-1] regionserver.HStore(310): Store=17611bb296cebb14e685d401dffa2ef1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-02 15:01:48,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,661 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-02 15:01:48,662 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17611bb296cebb14e685d401dffa2ef1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=706512, jitterRate=-0.10162419080734253}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-02 15:01:48,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17611bb296cebb14e685d401dffa2ef1: 2023-06-02 15:01:48,665 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1., pid=6, masterSystemTime=1685718108649 2023-06-02 15:01:48,667 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,667 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 
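[editor's note] The "Post open deploy tasks" entries above publish the opened region's location into hbase:meta, which is exactly what clients read back when they locate a region. A sketch of that lookup from the client side, assuming a reachable cluster through the default configuration:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocateRegionSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
            // reload=true bypasses the client cache and reads the freshly written meta row.
            HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
            System.out.println(loc.getRegion().getEncodedName() + " @ " + loc.getServerName());
        }
    }
}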
2023-06-02 15:01:48,667 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=17611bb296cebb14e685d401dffa2ef1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,667 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685718108667"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685718108667"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685718108667"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685718108667"}]},"ts":"1685718108667"} 2023-06-02 15:01:48,670 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-02 15:01:48,670 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 17611bb296cebb14e685d401dffa2ef1, server=jenkins-hbase4.apache.org,33137,1685718107779 in 172 msec 2023-06-02 15:01:48,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-02 15:01:48,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=17611bb296cebb14e685d401dffa2ef1, ASSIGN in 329 msec 2023-06-02 15:01:48,673 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-02 15:01:48,673 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685718108673"}]},"ts":"1685718108673"} 2023-06-02 15:01:48,674 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-02 15:01:48,676 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-02 15:01:48,677 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 374 msec 2023-06-02 15:01:48,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-02 15:01:48,705 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:01:48,705 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:48,709 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-02 15:01:48,717 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): 
master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:01:48,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-06-02 15:01:48,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-02 15:01:48,736 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-02 15:01:48,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-02 15:01:48,756 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-02 15:01:48,758 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.952sec 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38701,1685718107739-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-02 15:01:48,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38701,1685718107739-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
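[editor's note] The two CreateNamespaceProcedure entries above are the master bootstrapping the built-in "default" and "hbase" namespaces. A sketch of the equivalent client-side operation for a user namespace; the namespace name is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
            for (NamespaceDescriptor nd : admin.listNamespaceDescriptors()) {
                System.out.println(nd.getName()); // "default" and "hbase" come from the master itself
            }
        }
    }
}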
2023-06-02 15:01:48,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-02 15:01:48,800 DEBUG [Listener at localhost/35309] zookeeper.ReadOnlyZKClient(139): Connect 0x19c4aa11 to 127.0.0.1:64448 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-02 15:01:48,806 DEBUG [Listener at localhost/35309] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f6158c2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-02 15:01:48,807 DEBUG [hconnection-0x74feadc4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-02 15:01:48,809 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40704, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-02 15:01:48,810 INFO [Listener at localhost/35309] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:48,810 INFO [Listener at localhost/35309] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-02 15:01:48,813 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-02 15:01:48,813 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:48,814 INFO [Listener at localhost/35309] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-02 15:01:48,814 INFO [Listener at localhost/35309] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-02 15:01:48,816 INFO [Listener at localhost/35309] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1, archiveDir=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs, maxLogs=32 2023-06-02 15:01:48,822 INFO [Listener at localhost/35309] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1/test.com%2C8080%2C1.1685718108817 2023-06-02 15:01:48,822 DEBUG [Listener at localhost/35309] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44331,DS-f057a452-7c22-414d-ae08-d353784271ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8,DISK]] 2023-06-02 15:01:48,828 INFO [Listener at localhost/35309] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1/test.com%2C8080%2C1.1685718108817 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1/test.com%2C8080%2C1.1685718108822 
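[editor's note] The "Rolled WAL ... with entries=0, filesize=83 B; new WAL ..." entry above is the test rolling its WAL directly through the WAL factory. From a client, a comparable roll can be requested per region server; a sketch assuming the Admin#getRegionServers and Admin#rollWALWriter methods available in recent 2.x clients:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Ask every live region server to roll its WAL writer.
            for (ServerName sn : admin.getRegionServers()) {
                admin.rollWALWriter(sn);
            }
        }
    }
}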
2023-06-02 15:01:48,828 DEBUG [Listener at localhost/35309] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44331,DS-f057a452-7c22-414d-ae08-d353784271ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-a32b75e4-c8a5-4af4-a2d0-3ee22d3abca8,DISK]] 2023-06-02 15:01:48,828 DEBUG [Listener at localhost/35309] wal.AbstractFSWAL(716): hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1/test.com%2C8080%2C1.1685718108817 is not closed yet, will try archiving it next time 2023-06-02 15:01:48,829 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1 2023-06-02 15:01:48,837 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/test.com,8080,1/test.com%2C8080%2C1.1685718108817 to hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs/test.com%2C8080%2C1.1685718108817 2023-06-02 15:01:48,839 DEBUG [Listener at localhost/35309] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs 2023-06-02 15:01:48,839 INFO [Listener at localhost/35309] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685718108822) 2023-06-02 15:01:48,839 INFO [Listener at localhost/35309] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-02 15:01:48,840 DEBUG [Listener at localhost/35309] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19c4aa11 to 127.0.0.1:64448 2023-06-02 15:01:48,840 DEBUG [Listener at localhost/35309] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:48,841 DEBUG [Listener at localhost/35309] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-02 15:01:48,841 DEBUG [Listener at localhost/35309] util.JVMClusterUtil(257): Found active master hash=1312874529, stopped=false 2023-06-02 15:01:48,841 INFO [Listener at localhost/35309] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:48,847 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:48,847 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-02 15:01:48,847 INFO [Listener at localhost/35309] procedure2.ProcedureExecutor(629): Stopping 2023-06-02 15:01:48,847 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-02 15:01:48,848 DEBUG [Listener at localhost/35309] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x030103b9 to 127.0.0.1:64448 2023-06-02 15:01:48,849 DEBUG [Listener at localhost/35309] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:48,849 INFO [Listener at localhost/35309] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33137,1685718107779' ***** 2023-06-02 15:01:48,849 INFO [Listener at 
localhost/35309] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-02 15:01:48,848 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:48,849 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-02 15:01:48,849 INFO [RS:0;jenkins-hbase4:33137] regionserver.HeapMemoryManager(220): Stopping 2023-06-02 15:01:48,849 INFO [RS:0;jenkins-hbase4:33137] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-02 15:01:48,849 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-02 15:01:48,849 INFO [RS:0;jenkins-hbase4:33137] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-02 15:01:48,850 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(3303): Received CLOSE for 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,850 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:48,850 DEBUG [RS:0;jenkins-hbase4:33137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x653d4177 to 127.0.0.1:64448 2023-06-02 15:01:48,850 DEBUG [RS:0;jenkins-hbase4:33137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:48,850 INFO [RS:0;jenkins-hbase4:33137] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-02 15:01:48,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17611bb296cebb14e685d401dffa2ef1, disabling compactions & flushes 2023-06-02 15:01:48,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,851 INFO [RS:0;jenkins-hbase4:33137] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-02 15:01:48,851 INFO [RS:0;jenkins-hbase4:33137] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-02 15:01:48,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,851 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-02 15:01:48,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. after waiting 0 ms 2023-06-02 15:01:48,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 
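[editor's note] The STOPPING/STOPPED sequence above is what HBaseTestingUtility#shutdownMiniCluster triggers at the end of a test. A skeleton of the minicluster lifecycle that produces logs like this whole section, assuming the hbase-testing-util test dependency; the table and family names are illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterLifecycleSketch {
    public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();   // brings up DFS, ZooKeeper, a master, and a region server
        try {
            Table t = util.createTable(TableName.valueOf("demo"), Bytes.toBytes("cf"));
            System.out.println("created " + t.getName());
        } finally {
            util.shutdownMiniCluster();   // produces the STOPPING/STOPPED sequence seen above
        }
    }
}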
2023-06-02 15:01:48,851 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-02 15:01:48,851 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1478): Online Regions={17611bb296cebb14e685d401dffa2ef1=hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1., 1588230740=hbase:meta,,1.1588230740} 2023-06-02 15:01:48,851 DEBUG [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1504): Waiting on 1588230740, 17611bb296cebb14e685d401dffa2ef1 2023-06-02 15:01:48,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 17611bb296cebb14e685d401dffa2ef1 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-02 15:01:48,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-02 15:01:48,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-02 15:01:48,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-02 15:01:48,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-02 15:01:48,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-02 15:01:48,855 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-06-02 15:01:48,870 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/.tmp/info/ea81c81943584c1aa52600b864cbd69c 2023-06-02 15:01:48,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/.tmp/info/3d65043235414282976f1f3d18457c92 2023-06-02 15:01:48,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/.tmp/info/3d65043235414282976f1f3d18457c92 as hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/info/3d65043235414282976f1f3d18457c92 2023-06-02 15:01:48,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/info/3d65043235414282976f1f3d18457c92, entries=2, sequenceid=6, filesize=4.8 K 2023-06-02 15:01:48,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 17611bb296cebb14e685d401dffa2ef1 in 32ms, sequenceid=6, compaction requested=false 2023-06-02 15:01:48,883 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-02 15:01:48,886 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/.tmp/table/55528e4874b84a559624fc0659fed549 2023-06-02 15:01:48,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/namespace/17611bb296cebb14e685d401dffa2ef1/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-02 15:01:48,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17611bb296cebb14e685d401dffa2ef1: 2023-06-02 15:01:48,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685718108301.17611bb296cebb14e685d401dffa2ef1. 2023-06-02 15:01:48,891 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/.tmp/info/ea81c81943584c1aa52600b864cbd69c as hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/info/ea81c81943584c1aa52600b864cbd69c 2023-06-02 15:01:48,896 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/info/ea81c81943584c1aa52600b864cbd69c, entries=10, sequenceid=9, filesize=5.9 K 2023-06-02 15:01:48,897 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/.tmp/table/55528e4874b84a559624fc0659fed549 as hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/table/55528e4874b84a559624fc0659fed549 2023-06-02 15:01:48,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/table/55528e4874b84a559624fc0659fed549, entries=2, sequenceid=9, filesize=4.7 K 2023-06-02 15:01:48,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 47ms, sequenceid=9, compaction requested=false 2023-06-02 15:01:48,902 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-02 15:01:48,907 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-06-02 15:01:48,908 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-02 15:01:48,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-02 15:01:48,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-02 15:01:48,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-02 15:01:49,041 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-02 15:01:49,041 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-02 15:01:49,052 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33137,1685718107779; all regions closed. 2023-06-02 15:01:49,052 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:49,057 DEBUG [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs 2023-06-02 15:01:49,057 INFO [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33137%2C1685718107779.meta:.meta(num 1685718108255) 2023-06-02 15:01:49,057 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/WALs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:49,061 DEBUG [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/oldWALs 2023-06-02 15:01:49,061 INFO [RS:0;jenkins-hbase4:33137] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33137%2C1685718107779:(num 1685718108161) 2023-06-02 15:01:49,061 DEBUG [RS:0;jenkins-hbase4:33137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:49,061 INFO [RS:0;jenkins-hbase4:33137] regionserver.LeaseManager(133): Closed leases 2023-06-02 15:01:49,061 INFO [RS:0;jenkins-hbase4:33137] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-02 15:01:49,061 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
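[editor's note] The flush entries above follow a write-to-temp-then-commit pattern: the flushed file is written under the region's .tmp directory and then moved into the column-family directory (and, for WALs, into oldWALs). A minimal sketch of that pattern with the Hadoop FileSystem API; the local filesystem and the paths stand in for the HDFS instance and store directories of this run:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenRenameSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);           // stand-in for hdfs://localhost:...
        Path tmp = new Path("/tmp/demo-store/.tmp/flushfile");      // illustrative paths
        Path committed = new Path("/tmp/demo-store/info/flushfile");
        fs.mkdirs(tmp.getParent());
        fs.mkdirs(committed.getParent());
        try (FSDataOutputStream out = fs.create(tmp, true)) {
            out.writeBytes("flushed cells would go here");
        }
        // Commit step: rename from the temp location into the store directory.
        boolean ok = fs.rename(tmp, committed);
        System.out.println("committed=" + ok);
    }
}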
2023-06-02 15:01:49,062 INFO [RS:0;jenkins-hbase4:33137] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33137 2023-06-02 15:01:49,065 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33137,1685718107779 2023-06-02 15:01:49,065 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:01:49,065 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-02 15:01:49,068 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33137,1685718107779] 2023-06-02 15:01:49,068 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33137,1685718107779; numProcessing=1 2023-06-02 15:01:49,069 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33137,1685718107779 already deleted, retry=false 2023-06-02 15:01:49,069 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33137,1685718107779 expired; onlineServers=0 2023-06-02 15:01:49,069 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38701,1685718107739' ***** 2023-06-02 15:01:49,069 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-02 15:01:49,069 DEBUG [M:0;jenkins-hbase4:38701] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a7f6d31, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-06-02 15:01:49,070 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38701,1685718107739 2023-06-02 15:01:49,070 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38701,1685718107739; all regions closed. 2023-06-02 15:01:49,070 DEBUG [M:0;jenkins-hbase4:38701] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-02 15:01:49,070 DEBUG [M:0;jenkins-hbase4:38701] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-02 15:01:49,070 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-02 15:01:49,070 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718107909] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685718107909,5,FailOnTimeoutGroup]
2023-06-02 15:01:49,070 DEBUG [M:0;jenkins-hbase4:38701] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-02 15:01:49,070 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718107909] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685718107909,5,FailOnTimeoutGroup]
2023-06-02 15:01:49,071 INFO [M:0;jenkins-hbase4:38701] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-02 15:01:49,071 INFO [M:0;jenkins-hbase4:38701] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-02 15:01:49,071 INFO [M:0;jenkins-hbase4:38701] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-06-02 15:01:49,071 DEBUG [M:0;jenkins-hbase4:38701] master.HMaster(1512): Stopping service threads
2023-06-02 15:01:49,071 INFO [M:0;jenkins-hbase4:38701] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-02 15:01:49,072 ERROR [M:0;jenkins-hbase4:38701] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-06-02 15:01:49,072 INFO [M:0;jenkins-hbase4:38701] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-02 15:01:49,072 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-02 15:01:49,072 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-02 15:01:49,072 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-02 15:01:49,072 DEBUG [M:0;jenkins-hbase4:38701] zookeeper.ZKUtil(398): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-02 15:01:49,072 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-02 15:01:49,072 WARN [M:0;jenkins-hbase4:38701] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-02 15:01:49,072 INFO [M:0;jenkins-hbase4:38701] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-02 15:01:49,073 INFO [M:0;jenkins-hbase4:38701] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-02 15:01:49,073 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-02 15:01:49,073 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 15:01:49,073 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 15:01:49,073 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-02 15:01:49,073 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 15:01:49,073 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB
2023-06-02 15:01:49,082 INFO [M:0;jenkins-hbase4:38701] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dae09e969bf94ea696668eaae815d839
2023-06-02 15:01:49,087 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dae09e969bf94ea696668eaae815d839 as hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dae09e969bf94ea696668eaae815d839
2023-06-02 15:01:49,090 INFO [M:0;jenkins-hbase4:38701] regionserver.HStore(1080): Added hdfs://localhost:46049/user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dae09e969bf94ea696668eaae815d839, entries=8, sequenceid=66, filesize=6.3 K
2023-06-02 15:01:49,092 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 19ms, sequenceid=66, compaction requested=false
2023-06-02 15:01:49,093 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-02 15:01:49,093 DEBUG [M:0;jenkins-hbase4:38701] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-02 15:01:49,094 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f962a373-b2ef-c4c6-142c-c39af3653349/MasterData/WALs/jenkins-hbase4.apache.org,38701,1685718107739
2023-06-02 15:01:49,096 INFO [M:0;jenkins-hbase4:38701] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-02 15:01:49,096 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-02 15:01:49,097 INFO [M:0;jenkins-hbase4:38701] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38701
2023-06-02 15:01:49,099 DEBUG [M:0;jenkins-hbase4:38701] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38701,1685718107739 already deleted, retry=false
2023-06-02 15:01:49,243 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-02 15:01:49,243 INFO [M:0;jenkins-hbase4:38701] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38701,1685718107739; zookeeper connection closed.
2023-06-02 15:01:49,243 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): master:38701-0x1008c0f894e0000, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-02 15:01:49,343 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-02 15:01:49,343 DEBUG [Listener at localhost/35309-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1008c0f894e0001, quorum=127.0.0.1:64448, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-02 15:01:49,343 INFO [RS:0;jenkins-hbase4:33137] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33137,1685718107779; zookeeper connection closed.
2023-06-02 15:01:49,344 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e91691b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e91691b
2023-06-02 15:01:49,344 INFO [Listener at localhost/35309] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-02 15:01:49,344 WARN [Listener at localhost/35309] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-02 15:01:49,348 INFO [Listener at localhost/35309] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-02 15:01:49,453 WARN [BP-1252707950-172.31.14.131-1685718107174 heartbeating to localhost/127.0.0.1:46049] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-02 15:01:49,453 WARN [BP-1252707950-172.31.14.131-1685718107174 heartbeating to localhost/127.0.0.1:46049] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1252707950-172.31.14.131-1685718107174 (Datanode Uuid d059d7e9-85b5-4aac-9f64-5e6e96861706) service to localhost/127.0.0.1:46049
2023-06-02 15:01:49,453 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/dfs/data/data3/current/BP-1252707950-172.31.14.131-1685718107174] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-02 15:01:49,454 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/dfs/data/data4/current/BP-1252707950-172.31.14.131-1685718107174] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-02 15:01:49,454 WARN [Listener at localhost/35309] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-02 15:01:49,460 INFO [Listener at localhost/35309] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-02 15:01:49,562 WARN [BP-1252707950-172.31.14.131-1685718107174 heartbeating to localhost/127.0.0.1:46049] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-02 15:01:49,562 WARN [BP-1252707950-172.31.14.131-1685718107174 heartbeating to localhost/127.0.0.1:46049] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1252707950-172.31.14.131-1685718107174 (Datanode Uuid a7dfcdc7-7a8d-4c0f-81ac-ab2bfb95976e) service to localhost/127.0.0.1:46049
2023-06-02 15:01:49,563 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/dfs/data/data1/current/BP-1252707950-172.31.14.131-1685718107174] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-02 15:01:49,563 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a70605fe-667e-3ee1-1e47-4dc3809a8863/cluster_c3b0d7b0-8d07-6cc9-95dc-14144e3726e6/dfs/data/data2/current/BP-1252707950-172.31.14.131-1685718107174] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-02 15:01:49,572 INFO [Listener at localhost/35309] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-02 15:01:49,682 INFO [Listener at localhost/35309] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-02 15:01:49,692 INFO [Listener at localhost/35309] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-02 15:01:49,703 INFO [Listener at localhost/35309] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=130 (was 106) - Thread LEAK? -, OpenFileDescriptor=560 (was 533) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=25 (was 27), ProcessCount=170 (was 170), AvailableMemoryMB=309 (was 304) - AvailableMemoryMB LEAK? -
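Note on the teardown logged above: the cascade (regions closed, WAL files rolled into oldWALs, the master store flushed and committed, ZooKeeper and the DFS minicluster stopped, ending at the ResourceChecker "after:" report) is what HBaseTestingUtility emits when a test shuts down its minicluster. The following is a minimal, hypothetical JUnit sketch of that lifecycle, not the actual TestLogRolling source; startMiniCluster(), createTable() and shutdownMiniCluster() are real HBaseTestingUtility methods, while the class name, table name and column family below are illustrative assumptions.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class MiniClusterLifecycleSketch {  // hypothetical test class, for illustration only
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Starts HDFS, ZooKeeper, one master and one regionserver,
        // corresponding to the "Starting up minicluster" phase of this log.
        TEST_UTIL.startMiniCluster();
      }

      @Test
      public void writeSomethingSoTheWalHasEntries() throws Exception {
        // A trivial write so the regionserver WAL has something to roll.
        TableName name = TableName.valueOf("sketchTable");  // illustrative table name
        Table table = TEST_UTIL.createTable(name, Bytes.toBytes("cf"));
        table.put(new Put(Bytes.toBytes("row1"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Produces the shutdown sequence seen above: regions close, WALs move
        // to oldWALs, the master store flushes, then ZK and the DFS minicluster
        // stop, ending with "Minicluster is down".
        TEST_UTIL.shutdownMiniCluster();
      }
    }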