2023-06-06 18:52:43,425 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e 2023-06-06 18:52:43,437 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-06-06 18:52:43,467 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=324, ProcessCount=169, AvailableMemoryMB=7126 2023-06-06 18:52:43,472 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:52:43,473 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c, deleteOnExit=true 2023-06-06 18:52:43,473 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:52:43,474 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/test.cache.data in system properties and HBase conf 2023-06-06 18:52:43,474 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:52:43,474 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/hadoop.log.dir in system properties and HBase conf 2023-06-06 18:52:43,475 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:52:43,475 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:52:43,476 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:52:43,583 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-06-06 18:52:43,934 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-06 18:52:43,937 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:52:43,937 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:52:43,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:52:43,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:52:43,938 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:52:43,939 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:52:43,939 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:52:43,940 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:52:43,940 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:52:43,940 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/nfs.dump.dir in system properties and HBase conf 2023-06-06 18:52:43,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:52:43,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:52:43,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:52:43,941 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:52:44,425 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:52:44,440 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:52:44,445 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:52:44,693 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-06-06 18:52:44,852 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-06-06 18:52:44,869 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:52:44,905 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:52:44,938 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/java.io.tmpdir/Jetty_localhost_localdomain_36473_hdfs____.pf48k9/webapp 2023-06-06 18:52:45,107 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36473 2023-06-06 18:52:45,116 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
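The StartMiniClusterOption printed at the top of this run (numMasters=1, masterClass=null, numRegionServers=1, numDataNodes=2, numZkServers=1, createRootDir=false, createWALDir=false) corresponds to a test bootstrap along the lines below. This is a minimal sketch against the HBase 2.x HBaseTestingUtility API, not the actual TestLogRolling setup; the class name and comments are illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    // Same cluster shape as the options logged above: 1 master, 1 RS, 2 datanodes, 1 ZK server.
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // starts mini DFS, mini ZooKeeper and HBase, as traced in this log
    try {
      // ... run test logic against util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();    // tears the cluster down and removes the test-data directory
    }
  }
}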
2023-06-06 18:52:45,118 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:52:45,118 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:52:45,523 WARN [Listener at localhost.localdomain/34031] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:52:45,584 WARN [Listener at localhost.localdomain/34031] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:52:45,600 WARN [Listener at localhost.localdomain/34031] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:52:45,606 INFO [Listener at localhost.localdomain/34031] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:52:45,610 INFO [Listener at localhost.localdomain/34031] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/java.io.tmpdir/Jetty_localhost_40957_datanode____jp1k3q/webapp 2023-06-06 18:52:45,692 INFO [Listener at localhost.localdomain/34031] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40957 2023-06-06 18:52:45,954 WARN [Listener at localhost.localdomain/38475] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:52:45,966 WARN [Listener at localhost.localdomain/38475] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:52:45,969 WARN [Listener at localhost.localdomain/38475] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:52:45,971 INFO [Listener at localhost.localdomain/38475] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:52:45,977 INFO [Listener at localhost.localdomain/38475] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/java.io.tmpdir/Jetty_localhost_37077_datanode____.iryc4t/webapp 2023-06-06 18:52:46,055 INFO [Listener at localhost.localdomain/38475] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37077 2023-06-06 18:52:46,063 WARN [Listener at localhost.localdomain/40767] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:52:46,799 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x901463b3383b5a64: Processing first storage report for DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5 from datanode e26f4d78-5dee-4f66-aa6f-dfdb0475f216 2023-06-06 18:52:46,800 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x901463b3383b5a64: from storage DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5 node DatanodeRegistration(127.0.0.1:44581, datanodeUuid=e26f4d78-5dee-4f66-aa6f-dfdb0475f216, infoPort=40155, infoSecurePort=0, ipcPort=38475, storageInfo=lv=-57;cid=testClusterID;nsid=1518151956;c=1686077564510), 
blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc7ce800fb40855a1: Processing first storage report for DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445 from datanode ed1c069d-7f1b-45c7-adf9-82a007441050 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc7ce800fb40855a1: from storage DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445 node DatanodeRegistration(127.0.0.1:39995, datanodeUuid=ed1c069d-7f1b-45c7-adf9-82a007441050, infoPort=43521, infoSecurePort=0, ipcPort=40767, storageInfo=lv=-57;cid=testClusterID;nsid=1518151956;c=1686077564510), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x901463b3383b5a64: Processing first storage report for DS-c5e2bca5-9d1d-4b99-a67c-5f10dc23fe76 from datanode e26f4d78-5dee-4f66-aa6f-dfdb0475f216 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x901463b3383b5a64: from storage DS-c5e2bca5-9d1d-4b99-a67c-5f10dc23fe76 node DatanodeRegistration(127.0.0.1:44581, datanodeUuid=e26f4d78-5dee-4f66-aa6f-dfdb0475f216, infoPort=40155, infoSecurePort=0, ipcPort=38475, storageInfo=lv=-57;cid=testClusterID;nsid=1518151956;c=1686077564510), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc7ce800fb40855a1: Processing first storage report for DS-6a04f5ed-7695-4f44-8dcf-3a8acda03c12 from datanode ed1c069d-7f1b-45c7-adf9-82a007441050 2023-06-06 18:52:46,801 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc7ce800fb40855a1: from storage DS-6a04f5ed-7695-4f44-8dcf-3a8acda03c12 node DatanodeRegistration(127.0.0.1:39995, datanodeUuid=ed1c069d-7f1b-45c7-adf9-82a007441050, infoPort=43521, infoSecurePort=0, ipcPort=40767, storageInfo=lv=-57;cid=testClusterID;nsid=1518151956;c=1686077564510), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:52:46,884 DEBUG [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e 2023-06-06 18:52:46,941 INFO [Listener at localhost.localdomain/40767] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/zookeeper_0, clientPort=63828, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:52:46,954 INFO [Listener at localhost.localdomain/40767] 
zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63828 2023-06-06 18:52:46,961 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:46,963 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:47,596 INFO [Listener at localhost.localdomain/40767] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf with version=8 2023-06-06 18:52:47,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:52:47,837 INFO [Listener at localhost.localdomain/40767] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-06-06 18:52:48,184 INFO [Listener at localhost.localdomain/40767] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:52:48,209 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,209 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,209 INFO [Listener at localhost.localdomain/40767] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:52:48,210 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,210 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:52:48,321 INFO [Listener at localhost.localdomain/40767] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:52:48,387 DEBUG [Listener at localhost.localdomain/40767] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-06-06 18:52:48,460 INFO [Listener at localhost.localdomain/40767] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45465 2023-06-06 18:52:48,469 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:48,471 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so 
can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:48,493 INFO [Listener at localhost.localdomain/40767] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45465 connecting to ZooKeeper ensemble=127.0.0.1:63828 2023-06-06 18:52:48,525 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:454650x0, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:52:48,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45465-0x101c1c407fc0000 connected 2023-06-06 18:52:48,553 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:52:48,554 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:52:48,558 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:52:48,566 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45465 2023-06-06 18:52:48,566 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45465 2023-06-06 18:52:48,567 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45465 2023-06-06 18:52:48,567 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45465 2023-06-06 18:52:48,569 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45465 2023-06-06 18:52:48,574 INFO [Listener at localhost.localdomain/40767] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf, hbase.cluster.distributed=false 2023-06-06 18:52:48,630 INFO [Listener at localhost.localdomain/40767] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:52:48,630 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,630 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,630 INFO [Listener at localhost.localdomain/40767] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:52:48,630 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-06-06 18:52:48,631 INFO [Listener at localhost.localdomain/40767] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:52:48,634 INFO [Listener at localhost.localdomain/40767] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:52:48,637 INFO [Listener at localhost.localdomain/40767] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43601 2023-06-06 18:52:48,639 INFO [Listener at localhost.localdomain/40767] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:52:48,644 DEBUG [Listener at localhost.localdomain/40767] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:52:48,645 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:48,647 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:48,648 INFO [Listener at localhost.localdomain/40767] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43601 connecting to ZooKeeper ensemble=127.0.0.1:63828 2023-06-06 18:52:48,651 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:436010x0, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:52:48,652 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43601-0x101c1c407fc0001 connected 2023-06-06 18:52:48,652 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:52:48,653 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:52:48,654 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ZKUtil(164): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:52:48,654 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43601 2023-06-06 18:52:48,655 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43601 2023-06-06 18:52:48,656 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43601 2023-06-06 18:52:48,656 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43601 2023-06-06 18:52:48,656 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcExecutor(311): Started handlerCount=1 
with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43601 2023-06-06 18:52:48,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:48,668 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:52:48,670 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:48,689 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:52:48,689 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:52:48,689 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:48,690 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:52:48,692 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,45465,1686077567708 from backup master directory 2023-06-06 18:52:48,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:52:48,694 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:48,694 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:52:48,695 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
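With the backup-master znode deleted and /hbase/master claimed above, the master is about to register itself as active. A test can then reach the in-process daemons through the mini cluster handle; the helper below is a sketch assuming the standard MiniHBaseCluster accessors (the ClusterHandles class and the 30-second wait are illustrative choices, not part of this run).

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.master.HMaster;
import org.apache.hadoop.hbase.regionserver.HRegionServer;

final class ClusterHandles {
  // Wait for the active master to finish initializing and return the lone region server of this run.
  static HRegionServer singleRegionServer(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    HMaster master = cluster.getMaster();               // active master (port 45465 in this log)
    util.waitFor(30_000, () -> master.isInitialized());
    return cluster.getRegionServer(0);                  // the single region server (port 43601 here)
  }
}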
2023-06-06 18:52:48,695 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:48,698 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-06-06 18:52:48,699 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-06-06 18:52:48,783 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase.id with ID: 5e96da7a-4f9a-4570-accb-072d6c8b3a95 2023-06-06 18:52:48,831 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:48,847 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:48,887 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x49e62556 to 127.0.0.1:63828 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:52:48,915 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@111c1645, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:52:48,934 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:52:48,935 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:52:48,942 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:52:48,970 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store-tmp 2023-06-06 18:52:48,998 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): 
Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:48,998 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:52:48,998 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:52:48,998 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:52:48,998 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:52:48,999 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:52:48,999 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:52:48,999 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:52:49,000 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/WALs/jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:49,020 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45465%2C1686077567708, suffix=, logDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/WALs/jenkins-hbase20.apache.org,45465,1686077567708, archiveDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/oldWALs, maxLogs=10 2023-06-06 18:52:49,037 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
    at java.lang.Class.getMethod(Class.java:1786)
    at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750)
    at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
    at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
    at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
    at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
    at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
    at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
    at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-06-06 18:52:49,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/WALs/jenkins-hbase20.apache.org,45465,1686077567708/jenkins-hbase20.apache.org%2C45465%2C1686077567708.1686077569036 2023-06-06 18:52:49,059 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:52:49,060 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:52:49,060 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:49,064 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,065 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,114 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,121 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:52:49,142 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:52:49,155 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:49,160 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,162 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,179 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:52:49,183 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:52:49,184 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=769364, jitterRate=-0.021703943610191345}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:52:49,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:52:49,186 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:52:49,202 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:52:49,202 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:52:49,205 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
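The master-store WAL above was created through AbstractFSWAL.rollWriter(), the same path a log roll takes during the test. The sketch below shows one hedged way a test can generate an edit and force a roll against a region server's WAL; the table name, helper class, and flow are illustrative and not the exact TestLogRolling code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WAL;

final class LogRollSketch {
  // Write one row, then ask the region server's WAL for that region to roll its writer.
  static void writeAndRoll(HBaseTestingUtility util, HRegionServer rs) throws Exception {
    TableName name = TableName.valueOf("rollTest");   // hypothetical table name
    byte[] family = Bytes.toBytes("cf");
    try (Table table = util.createTable(name, family)) {
      table.put(new Put(Bytes.toBytes("row1"))
          .addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v")));
      RegionInfo region = rs.getRegions(name).get(0).getRegionInfo();
      WAL wal = rs.getWAL(region);
      wal.rollWriter();   // same rollWriter() path that appears in the stack trace above
    }
  }
}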
2023-06-06 18:52:49,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-06 18:52:49,236 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 28 msec 2023-06-06 18:52:49,236 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:52:49,259 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:52:49,264 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:52:49,288 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:52:49,291 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-06 18:52:49,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:52:49,297 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:52:49,301 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:52:49,304 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:49,305 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:52:49,306 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:52:49,316 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:52:49,320 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:52:49,320 DEBUG 
[Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:52:49,320 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:49,321 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,45465,1686077567708, sessionid=0x101c1c407fc0000, setting cluster-up flag (Was=false) 2023-06-06 18:52:49,335 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:49,339 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:52:49,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:49,355 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:49,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:52:49,362 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:49,365 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.hbase-snapshot/.tmp 2023-06-06 18:52:49,460 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(951): ClusterId : 5e96da7a-4f9a-4570-accb-072d6c8b3a95 2023-06-06 18:52:49,462 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:52:49,464 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:52:49,469 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:52:49,469 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:52:49,472 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:52:49,472 DEBUG [RS:0;jenkins-hbase20:43601] zookeeper.ReadOnlyZKClient(139): Connect 0x3acf9ad3 to 127.0.0.1:63828 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 
18:52:49,474 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:52:49,474 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:52:49,474 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:52:49,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:52:49,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:52:49,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:52:49,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077599477 2023-06-06 18:52:49,479 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:52:49,481 DEBUG [RS:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@340873fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:52:49,482 DEBUG [RS:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3037f1a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:52:49,484 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:52:49,484 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:52:49,490 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:52:49,490 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 
'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:52:49,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:52:49,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:52:49,499 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:52:49,499 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:52:49,500 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,501 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:52:49,503 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:52:49,503 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:52:49,506 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:52:49,507 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:52:49,509 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43601 2023-06-06 18:52:49,510 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077569509,5,FailOnTimeoutGroup] 2023-06-06 18:52:49,511 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077569510,5,FailOnTimeoutGroup] 2023-06-06 18:52:49,511 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
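Rolled WAL files are eventually archived and pruned by the LogsCleaner chore scheduled above. The snippet below is a sketch of listing that archive directory from a test, assuming the conventional <hbase.rootdir>/oldWALs layout; the class name is made up for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.util.CommonFSUtils;

final class OldWalsSketch {
  // List archived WALs under <hbase.rootdir>/oldWALs, the directory the LogsCleaner chore prunes.
  static void printArchivedWals(Configuration conf) throws Exception {
    Path oldWals = new Path(CommonFSUtils.getRootDir(conf), HConstants.HREGION_OLDLOGDIR_NAME);
    FileSystem fs = oldWals.getFileSystem(conf);
    if (fs.exists(oldWals)) {
      for (FileStatus status : fs.listStatus(oldWals)) {
        System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
      }
    }
  }
}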
2023-06-06 18:52:49,511 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-06 18:52:49,513 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,514 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,516 INFO [RS:0;jenkins-hbase20:43601] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:52:49,517 INFO [RS:0;jenkins-hbase20:43601] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:52:49,517 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1022): About to register with Master. 2023-06-06 18:52:49,525 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,45465,1686077567708 with isa=jenkins-hbase20.apache.org/148.251.75.209:43601, startcode=1686077568629 2023-06-06 18:52:49,532 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:52:49,533 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:52:49,533 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf 2023-06-06 18:52:49,550 DEBUG [RS:0;jenkins-hbase20:43601] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:52:49,555 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:49,559 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:52:49,562 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/info 2023-06-06 18:52:49,563 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:52:49,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:49,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:52:49,568 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:52:49,569 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:52:49,571 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:49,571 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:52:49,573 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/table 2023-06-06 18:52:49,574 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:52:49,575 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:49,577 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740 2023-06-06 18:52:49,579 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740 2023-06-06 18:52:49,583 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
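The CompactionConfiguration lines above reflect the shipped defaults: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, a weekly major-compaction period with 0.5 jitter, and the ExploringCompactionPolicy. These values are read from the keys shown in the sketch below; the sketch only demonstrates how they would be set on a Configuration and does not reflect overrides made by this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // files [minFilesToCompact, maxFilesToCompact) in the log
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        // "minCompactSize" in the log (defaults to the memstore flush size, 128 MB)
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
        // selection ratios
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        // major compaction period (ms) and jitter
        conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000);
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        System.out.println(conf.get("hbase.hstore.compaction.ratio"));
      }
    }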
2023-06-06 18:52:49,585 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:52:49,589 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:52:49,590 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=813565, jitterRate=0.03450216352939606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:52:49,590 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:52:49,591 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:52:49,591 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:52:49,591 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:52:49,591 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:52:49,591 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:52:49,592 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:52:49,592 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:52:49,598 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:52:49,599 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:52:49,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:52:49,619 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:52:49,622 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:52:49,662 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59853, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:52:49,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,684 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf 2023-06-06 18:52:49,685 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34031 2023-06-06 
18:52:49,685 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:52:49,689 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:52:49,690 DEBUG [RS:0;jenkins-hbase20:43601] zookeeper.ZKUtil(162): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,690 WARN [RS:0;jenkins-hbase20:43601] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-06 18:52:49,690 INFO [RS:0;jenkins-hbase20:43601] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:52:49,691 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,693 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43601,1686077568629] 2023-06-06 18:52:49,699 DEBUG [RS:0;jenkins-hbase20:43601] zookeeper.ZKUtil(162): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,709 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:52:49,716 INFO [RS:0;jenkins-hbase20:43601] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:52:49,734 INFO [RS:0;jenkins-hbase20:43601] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:52:49,736 INFO [RS:0;jenkins-hbase20:43601] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:52:49,736 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,737 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:52:49,743 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
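Two of the region server messages above are configuration-driven: the WALFactory instantiating FSHLogProvider, and the MemStoreFlusher limits (782.4 M is 40% of the test JVM heap, and the 743.3 M low mark is 95% of that). A rough sketch of the keys behind them follows; the values shown are the defaults, not explicit settings from this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndMemstoreConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // WAL implementation: "filesystem" selects FSHLogProvider, as logged above.
        conf.set("hbase.wal.provider", "filesystem");
        // Fraction of heap usable by all memstores, and the low-water mark as a
        // fraction of that limit (the source of globalMemStoreLimit / LowMark above).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }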
2023-06-06 18:52:49,743 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,744 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,745 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,745 DEBUG [RS:0;jenkins-hbase20:43601] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:52:49,746 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,746 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,746 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:49,759 INFO [RS:0;jenkins-hbase20:43601] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:52:49,761 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1686077568629-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:52:49,775 DEBUG [jenkins-hbase20:45465] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:52:49,777 INFO [RS:0;jenkins-hbase20:43601] regionserver.Replication(203): jenkins-hbase20.apache.org,43601,1686077568629 started 2023-06-06 18:52:49,777 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43601,1686077568629, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43601, sessionid=0x101c1c407fc0001 2023-06-06 18:52:49,777 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:52:49,778 DEBUG [RS:0;jenkins-hbase20:43601] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,778 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43601,1686077568629' 2023-06-06 18:52:49,778 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:52:49,778 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43601,1686077568629, state=OPENING 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43601,1686077568629' 2023-06-06 18:52:49,779 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:52:49,780 DEBUG [RS:0;jenkins-hbase20:43601] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:52:49,780 DEBUG [RS:0;jenkins-hbase20:43601] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:52:49,780 INFO [RS:0;jenkins-hbase20:43601] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:52:49,780 INFO [RS:0;jenkins-hbase20:43601] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
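The flush-table-proc and online-snapshot members started above are the region server's side of cluster-wide flush and snapshot requests issued through the Admin API. A minimal, hypothetical usage sketch (the table and snapshot names are made up, and the connection setup is the standard client bootstrap, not something taken from this test):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshotSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("someTable");  // hypothetical
          admin.flush(table);                   // table-wide flush across region servers
          admin.snapshot("someTable-snap", table); // region servers join via online-snapshot
        }
      }
    }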
2023-06-06 18:52:49,784 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:52:49,785 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:49,786 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:52:49,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43601,1686077568629}] 2023-06-06 18:52:49,895 INFO [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43601%2C1686077568629, suffix=, logDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629, archiveDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/oldWALs, maxLogs=32 2023-06-06 18:52:49,911 INFO [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077569898 2023-06-06 18:52:49,911 DEBUG [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:52:49,975 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:49,978 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:52:49,981 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44102, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:52:49,992 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:52:49,993 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:52:49,996 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43601%2C1686077568629.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629, archiveDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/oldWALs, maxLogs=32 2023-06-06 18:52:50,010 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.meta.1686077569998.meta 2023-06-06 18:52:50,011 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:52:50,011 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:52:50,013 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:52:50,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:52:50,034 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:52:50,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:52:50,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:50,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:52:50,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:52:50,042 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:52:50,044 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/info 2023-06-06 18:52:50,044 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/info 2023-06-06 18:52:50,045 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:52:50,046 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:50,046 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:52:50,048 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:52:50,048 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:52:50,048 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:52:50,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:50,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:52:50,051 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/table 2023-06-06 18:52:50,051 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/table 2023-06-06 18:52:50,052 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:52:50,053 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:50,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740 2023-06-06 18:52:50,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740 2023-06-06 18:52:50,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:52:50,064 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:52:50,065 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=821341, jitterRate=0.04439003765583038}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:52:50,065 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:52:50,076 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077569969 2023-06-06 18:52:50,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:52:50,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:52:50,093 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43601,1686077568629, state=OPEN 2023-06-06 18:52:50,096 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:52:50,096 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:52:50,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:52:50,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43601,1686077568629 in 306 msec 2023-06-06 18:52:50,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:52:50,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 497 msec 2023-06-06 18:52:50,119 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 704 msec 2023-06-06 18:52:50,119 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077570119, completionTime=-1 2023-06-06 18:52:50,120 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:52:50,120 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:52:50,180 DEBUG [hconnection-0x29274a68-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:52:50,184 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44106, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:52:50,198 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:52:50,198 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077630198 2023-06-06 18:52:50,199 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686077690199 2023-06-06 18:52:50,199 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 78 msec 2023-06-06 18:52:50,223 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45465,1686077567708-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:50,224 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45465,1686077567708-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:50,224 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45465,1686077567708-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:50,225 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:45465, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:50,226 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:52:50,232 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:52:50,238 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-06 18:52:50,239 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:52:50,249 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:52:50,252 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:52:50,255 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:52:50,275 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/hbase/namespace/8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,277 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/hbase/namespace/8496f87023f6c85979bba9a69c134613 empty. 2023-06-06 18:52:50,278 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/hbase/namespace/8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,278 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:52:50,305 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:52:50,308 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8496f87023f6c85979bba9a69c134613, NAME => 'hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp 2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8496f87023f6c85979bba9a69c134613, disabling compactions & flushes 2023-06-06 18:52:50,326 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 
2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. after waiting 0 ms 2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:52:50,326 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:52:50,326 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8496f87023f6c85979bba9a69c134613: 2023-06-06 18:52:50,331 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:52:50,344 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077570334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077570334"}]},"ts":"1686077570334"} 2023-06-06 18:52:50,368 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:52:50,370 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:52:50,374 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077570370"}]},"ts":"1686077570370"} 2023-06-06 18:52:50,379 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:52:50,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8496f87023f6c85979bba9a69c134613, ASSIGN}] 2023-06-06 18:52:50,391 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8496f87023f6c85979bba9a69c134613, ASSIGN 2023-06-06 18:52:50,393 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8496f87023f6c85979bba9a69c134613, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43601,1686077568629; forceNewPlan=false, retain=false 2023-06-06 18:52:50,545 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8496f87023f6c85979bba9a69c134613, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:50,545 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077570544"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077570544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077570544"}]},"ts":"1686077570544"} 2023-06-06 18:52:50,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8496f87023f6c85979bba9a69c134613, server=jenkins-hbase20.apache.org,43601,1686077568629}] 2023-06-06 18:52:50,724 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:52:50,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8496f87023f6c85979bba9a69c134613, NAME => 'hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:52:50,726 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,729 INFO [StoreOpener-8496f87023f6c85979bba9a69c134613-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,732 DEBUG [StoreOpener-8496f87023f6c85979bba9a69c134613-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/info 2023-06-06 18:52:50,732 DEBUG [StoreOpener-8496f87023f6c85979bba9a69c134613-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/info 2023-06-06 18:52:50,733 INFO [StoreOpener-8496f87023f6c85979bba9a69c134613-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8496f87023f6c85979bba9a69c134613 columnFamilyName info 2023-06-06 18:52:50,734 INFO [StoreOpener-8496f87023f6c85979bba9a69c134613-1] regionserver.HStore(310): Store=8496f87023f6c85979bba9a69c134613/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:50,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,743 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8496f87023f6c85979bba9a69c134613 2023-06-06 18:52:50,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:52:50,748 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8496f87023f6c85979bba9a69c134613; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=829661, jitterRate=0.054968684911727905}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:52:50,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8496f87023f6c85979bba9a69c134613: 2023-06-06 18:52:50,751 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613., pid=6, masterSystemTime=1686077570711 2023-06-06 18:52:50,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:52:50,758 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 
2023-06-06 18:52:50,760 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8496f87023f6c85979bba9a69c134613, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:50,760 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077570759"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077570759"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077570759"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077570759"}]},"ts":"1686077570759"} 2023-06-06 18:52:50,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:52:50,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8496f87023f6c85979bba9a69c134613, server=jenkins-hbase20.apache.org,43601,1686077568629 in 208 msec 2023-06-06 18:52:50,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:52:50,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8496f87023f6c85979bba9a69c134613, ASSIGN in 381 msec 2023-06-06 18:52:50,775 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:52:50,776 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077570776"}]},"ts":"1686077570776"} 2023-06-06 18:52:50,780 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:52:50,784 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:52:50,787 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 544 msec 2023-06-06 18:52:50,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:52:50,854 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:52:50,854 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:50,897 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:52:50,916 DEBUG [Listener at localhost.localdomain/40767-EventThread] 
zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:52:50,922 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 31 msec 2023-06-06 18:52:50,932 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:52:50,944 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:52:50,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-06-06 18:52:50,967 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:52:50,970 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:52:50,970 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.274sec 2023-06-06 18:52:50,973 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:52:50,975 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:52:50,975 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:52:50,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45465,1686077567708-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:52:50,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45465,1686077567708-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
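The "default" and "hbase" namespaces created above are the two built-in ones; user namespaces go through the same CreateNamespaceProcedure when created via the Admin API. A short illustrative sketch (the namespace name "testns" is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Runs a CreateNamespaceProcedure on the master, like the ones logged above.
          admin.createNamespace(NamespaceDescriptor.create("testns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }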
2023-06-06 18:52:50,990 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:52:51,068 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ReadOnlyZKClient(139): Connect 0x34ea6eef to 127.0.0.1:63828 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:52:51,072 DEBUG [Listener at localhost.localdomain/40767] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6453d567, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:52:51,110 DEBUG [hconnection-0x3142b655-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:52:51,127 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40036, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:52:51,136 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:52:51,136 INFO [Listener at localhost.localdomain/40767] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:52:51,144 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:52:51,144 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:52:51,145 INFO [Listener at localhost.localdomain/40767] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:52:51,155 DEBUG [Listener at localhost.localdomain/40767] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-06 18:52:51,159 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-06 18:52:51,168 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-06 18:52:51,168 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
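The two TableDescriptorChecker warnings directly above are expected for this test: it runs with a very small maximum region file size (786432 bytes) and memstore flush size (8192 bytes) so that flushes, split checks and log rolls happen quickly. A hedged sketch of one way such limits can be expressed on the client Configuration; whether this particular test sets them on the Configuration or on the table descriptor is not visible in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch only: limits as small as the ones the warnings above report.
    static Configuration tinyRegionConf() {
      Configuration conf = HBaseConfiguration.create();
      conf.setLong("hbase.hregion.max.filesize", 786432L);      // ~768 KB before split checks kick in
      conf.setLong("hbase.hregion.memstore.flush.size", 8192L); // 8 KB memstore flush threshold
      return conf;
    }

With an 8 KB flush threshold, each small batch of puts seen later in the log (about 7.36 KB of data per batch) is enough to push the memstore over the limit, which is why flushes are requested so frequently below.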
2023-06-06 18:52:51,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:52:51,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-06 18:52:51,177 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:52:51,179 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:52:51,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-06 18:52:51,184 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,185 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54 empty. 
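The create request above prints the full schema of the test table. A sketch of how an equivalent table could be created through the HBase 2.x Admin API, mirroring the attributes shown in the log (one family 'info', ROW bloom filter, one version, 64 KB blocks, no compression or encoding); the method name is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Builds and creates a table equivalent to the descriptor logged above.
    static void createTestTable(Admin admin) throws IOException {
      ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("info"))
          .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
          .setMaxVersions(1)                   // VERSIONS => '1'
          .setBlocksize(65536)                 // BLOCKSIZE => '65536'
          .build();
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
          .setColumnFamily(info)
          .build();
      admin.createTable(td); // drives the CreateTableProcedure (pid=9) traced below
    }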
2023-06-06 18:52:51,187 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,187 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-06 18:52:51,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:52:51,216 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-06 18:52:51,219 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7d3ec1626cddaefddbf2bda1e210ec54, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/.tmp 2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 7d3ec1626cddaefddbf2bda1e210ec54, disabling compactions & flushes 2023-06-06 18:52:51,236 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. after waiting 0 ms 2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:52:51,236 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 
2023-06-06 18:52:51,236 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:52:51,241 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:52:51,243 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686077571243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077571243"}]},"ts":"1686077571243"} 2023-06-06 18:52:51,246 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:52:51,248 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:52:51,249 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077571248"}]},"ts":"1686077571248"} 2023-06-06 18:52:51,251 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-06 18:52:51,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=7d3ec1626cddaefddbf2bda1e210ec54, ASSIGN}] 2023-06-06 18:52:51,257 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=7d3ec1626cddaefddbf2bda1e210ec54, ASSIGN 2023-06-06 18:52:51,258 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=7d3ec1626cddaefddbf2bda1e210ec54, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43601,1686077568629; forceNewPlan=false, retain=false 2023-06-06 18:52:51,410 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=7d3ec1626cddaefddbf2bda1e210ec54, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:51,411 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686077571410"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077571410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077571410"}]},"ts":"1686077571410"} 2023-06-06 18:52:51,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 7d3ec1626cddaefddbf2bda1e210ec54, server=jenkins-hbase20.apache.org,43601,1686077568629}] 2023-06-06 18:52:51,583 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:52:51,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7d3ec1626cddaefddbf2bda1e210ec54, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:52:51,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:52:51,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,587 INFO [StoreOpener-7d3ec1626cddaefddbf2bda1e210ec54-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,590 DEBUG [StoreOpener-7d3ec1626cddaefddbf2bda1e210ec54-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info 2023-06-06 18:52:51,590 DEBUG [StoreOpener-7d3ec1626cddaefddbf2bda1e210ec54-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info 2023-06-06 18:52:51,590 INFO [StoreOpener-7d3ec1626cddaefddbf2bda1e210ec54-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7d3ec1626cddaefddbf2bda1e210ec54 columnFamilyName info 2023-06-06 18:52:51,591 INFO [StoreOpener-7d3ec1626cddaefddbf2bda1e210ec54-1] regionserver.HStore(310): Store=7d3ec1626cddaefddbf2bda1e210ec54/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:52:51,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:52:51,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:52:51,605 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7d3ec1626cddaefddbf2bda1e210ec54; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=697045, jitterRate=-0.11366234719753265}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:52:51,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:52:51,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54., pid=11, masterSystemTime=1686077571570 2023-06-06 18:52:51,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:52:51,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 
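With the region opened and post-open deploy tasks finished, the table is ready for reads and writes. A sketch, using only standard client calls, of how a test could look up the single region whose encoded name (7d3ec1626cddaefddbf2bda1e210ec54) and hosting server appear throughout this log; names are illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Prints every region of the test table; the log above shows exactly one,
    // hosted on jenkins-hbase20.apache.org,43601,1686077568629.
    static void showRegions(Connection conn) throws IOException {
      TableName tn = TableName.valueOf("TestLogRolling-testSlowSyncLogRolling");
      try (RegionLocator locator = conn.getRegionLocator(tn)) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }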
2023-06-06 18:52:51,610 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=7d3ec1626cddaefddbf2bda1e210ec54, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:52:51,611 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686077571610"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077571610"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077571610"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077571610"}]},"ts":"1686077571610"} 2023-06-06 18:52:51,617 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-06 18:52:51,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 7d3ec1626cddaefddbf2bda1e210ec54, server=jenkins-hbase20.apache.org,43601,1686077568629 in 199 msec 2023-06-06 18:52:51,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-06 18:52:51,622 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=7d3ec1626cddaefddbf2bda1e210ec54, ASSIGN in 363 msec 2023-06-06 18:52:51,623 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:52:51,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077571623"}]},"ts":"1686077571623"} 2023-06-06 18:52:51,626 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-06 18:52:51,628 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:52:51,631 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 457 msec 2023-06-06 18:52:55,563 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-06 18:52:55,715 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-06 18:52:55,717 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-06 18:52:55,719 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-06 18:52:57,834 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:52:57,835 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-06 18:53:01,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45465] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:53:01,207 INFO [Listener at localhost.localdomain/40767] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-06 18:53:01,210 DEBUG [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-06 18:53:01,211 DEBUG [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:53:13,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43601] regionserver.HRegion(9158): Flush requested on 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:53:13,262 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7d3ec1626cddaefddbf2bda1e210ec54 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:53:13,330 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/7b488e80f31e419886e98030db811190 2023-06-06 18:53:13,378 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/7b488e80f31e419886e98030db811190 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190 2023-06-06 18:53:13,390 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190, entries=7, sequenceid=11, filesize=12.1 K 2023-06-06 18:53:13,393 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 7d3ec1626cddaefddbf2bda1e210ec54 in 131ms, sequenceid=11, compaction requested=false 2023-06-06 18:53:13,394 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:53:21,485 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 203 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:23,694 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:25,901 INFO [sync.4] 
wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:28,107 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:28,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43601] regionserver.HRegion(9158): Flush requested on 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:53:28,108 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7d3ec1626cddaefddbf2bda1e210ec54 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:53:28,310 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:28,335 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/0779e9b0730a4cdfbd9d12b849b242b9 2023-06-06 18:53:28,347 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/0779e9b0730a4cdfbd9d12b849b242b9 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9 2023-06-06 18:53:28,357 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9, entries=7, sequenceid=21, filesize=12.1 K 2023-06-06 18:53:28,560 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:28,562 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 7d3ec1626cddaefddbf2bda1e210ec54 in 452ms, sequenceid=21, compaction requested=false 2023-06-06 18:53:28,562 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:53:28,563 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-06 18:53:28,563 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:53:28,566 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190 because midkey is the same as first or last row 2023-06-06 18:53:30,313 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:32,516 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:32,517 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43601%2C1686077568629:(num 1686077569898) roll requested 2023-06-06 18:53:32,517 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:32,731 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK], DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK]] 2023-06-06 18:53:32,732 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077569898 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077612518 2023-06-06 18:53:32,733 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:32,733 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077569898 is not closed yet, will try archiving it next time 2023-06-06 18:53:42,535 INFO [Listener at localhost.localdomain/40767] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-06 18:53:47,540 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:47,540 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:47,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43601] regionserver.HRegion(9158): Flush requested on 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:53:47,541 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43601%2C1686077568629:(num 1686077612518) roll requested 2023-06-06 18:53:47,541 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7d3ec1626cddaefddbf2bda1e210ec54 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:53:49,543 INFO [Listener at localhost.localdomain/40767] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-06 18:53:52,544 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:52,544 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:52,561 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:52,561 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:52,563 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077612518 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077627541 2023-06-06 18:53:52,563 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44581,DS-064463e6-2bbb-4a2e-a0c3-4ad385b0c3d5,DISK], DatanodeInfoWithStorage[127.0.0.1:39995,DS-34cba47a-98c0-4e64-aba4-d51c9ebf9445,DISK]] 2023-06-06 18:53:52,563 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629/jenkins-hbase20.apache.org%2C43601%2C1686077568629.1686077612518 is not closed yet, will try archiving it next time 2023-06-06 18:53:52,564 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), 
to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/96383990c5cb4b1f8c486f7d2a41ff9f 2023-06-06 18:53:52,577 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/96383990c5cb4b1f8c486f7d2a41ff9f as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f 2023-06-06 18:53:52,587 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f, entries=7, sequenceid=31, filesize=12.1 K 2023-06-06 18:53:52,591 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 7d3ec1626cddaefddbf2bda1e210ec54 in 5050ms, sequenceid=31, compaction requested=true 2023-06-06 18:53:52,591 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:53:52,591 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-06 18:53:52,591 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:53:52,591 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190 because midkey is the same as first or last row 2023-06-06 18:53:52,593 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:53:52,593 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:53:52,598 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:53:52,600 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HStore(1912): 7d3ec1626cddaefddbf2bda1e210ec54/info is initiating minor compaction (all files) 2023-06-06 18:53:52,600 INFO [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 7d3ec1626cddaefddbf2bda1e210ec54/info in TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 
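The slow sync records a little above show the two roll triggers this test exercises: a run of moderately slow syncs over a count threshold (count=7, threshold=5) and a single sync above a time threshold (time=5002 ms, threshold=5000 ms), each of which makes the log roller replace the active WAL file. A WAL roll can also be requested explicitly from a client; a sketch using the standard Admin API, with the server name copied from the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    // Asks the region server hosting the test region to roll its WAL, the same
    // operation the logRoller thread performs above after the slow sync warnings.
    static void rollWal(Admin admin) throws IOException {
      ServerName rs = ServerName.valueOf("jenkins-hbase20.apache.org,43601,1686077568629");
      admin.rollWALWriter(rs);
    }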
2023-06-06 18:53:52,600 INFO [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f] into tmpdir=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp, totalSize=36.3 K 2023-06-06 18:53:52,602 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] compactions.Compactor(207): Compacting 7b488e80f31e419886e98030db811190, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686077581215 2023-06-06 18:53:52,603 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] compactions.Compactor(207): Compacting 0779e9b0730a4cdfbd9d12b849b242b9, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1686077595263 2023-06-06 18:53:52,603 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] compactions.Compactor(207): Compacting 96383990c5cb4b1f8c486f7d2a41ff9f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1686077610111 2023-06-06 18:53:52,632 INFO [RS:0;jenkins-hbase20:43601-shortCompactions-0] throttle.PressureAwareThroughputController(145): 7d3ec1626cddaefddbf2bda1e210ec54#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:53:52,653 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/2db898be7afe4f1bb5049f3e8f75a2c4 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/2db898be7afe4f1bb5049f3e8f75a2c4 2023-06-06 18:53:52,674 INFO [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 7d3ec1626cddaefddbf2bda1e210ec54/info of 7d3ec1626cddaefddbf2bda1e210ec54 into 2db898be7afe4f1bb5049f3e8f75a2c4(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
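The minor compaction above rewrites the three 12.1 K flush files (36.3 K selected) into a single 27.0 K store file. That compaction was selected automatically once three eligible files existed; as a hedged illustration, a client can also request a compaction and poll for completion with standard Admin calls:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.CompactionState;

    // Requests a major compaction of the test table (asynchronous) and waits until
    // the region server reports that no compaction is running any more.
    static void compactAndWait(Admin admin) throws IOException, InterruptedException {
      TableName tn = TableName.valueOf("TestLogRolling-testSlowSyncLogRolling");
      admin.majorCompact(tn);
      while (admin.getCompactionState(tn) != CompactionState.NONE) {
        Thread.sleep(100);
      }
    }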
2023-06-06 18:53:52,675 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:53:52,675 INFO [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54., storeName=7d3ec1626cddaefddbf2bda1e210ec54/info, priority=13, startTime=1686077632593; duration=0sec 2023-06-06 18:53:52,676 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-06 18:53:52,676 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:53:52,677 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/2db898be7afe4f1bb5049f3e8f75a2c4 because midkey is the same as first or last row 2023-06-06 18:53:52,677 DEBUG [RS:0;jenkins-hbase20:43601-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:54:04,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43601] regionserver.HRegion(9158): Flush requested on 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:54:04,679 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7d3ec1626cddaefddbf2bda1e210ec54 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:54:04,705 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/2818f44e990d4e0e95edd26251e387f0 2023-06-06 18:54:04,715 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/2818f44e990d4e0e95edd26251e387f0 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/2818f44e990d4e0e95edd26251e387f0 2023-06-06 18:54:04,723 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/2818f44e990d4e0e95edd26251e387f0, entries=7, sequenceid=42, filesize=12.1 K 2023-06-06 18:54:04,724 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 7d3ec1626cddaefddbf2bda1e210ec54 in 46ms, sequenceid=42, compaction requested=false 2023-06-06 18:54:04,724 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:54:04,724 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split 
because info size=39.1 K, sizeToCheck=16.0 K 2023-06-06 18:54:04,724 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:54:04,724 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/2db898be7afe4f1bb5049f3e8f75a2c4 because midkey is the same as first or last row 2023-06-06 18:54:12,694 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:54:12,698 INFO [Listener at localhost.localdomain/40767] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-06 18:54:12,698 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34ea6eef to 127.0.0.1:63828 2023-06-06 18:54:12,699 DEBUG [Listener at localhost.localdomain/40767] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:54:12,701 DEBUG [Listener at localhost.localdomain/40767] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:54:12,701 DEBUG [Listener at localhost.localdomain/40767] util.JVMClusterUtil(257): Found active master hash=628462355, stopped=false 2023-06-06 18:54:12,702 INFO [Listener at localhost.localdomain/40767] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:54:12,704 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:54:12,704 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:54:12,704 INFO [Listener at localhost.localdomain/40767] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:54:12,704 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:12,705 DEBUG [Listener at localhost.localdomain/40767] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49e62556 to 127.0.0.1:63828 2023-06-06 18:54:12,705 DEBUG [Listener at localhost.localdomain/40767] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:54:12,706 INFO [Listener at localhost.localdomain/40767] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,43601,1686077568629' ***** 2023-06-06 18:54:12,706 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:54:12,706 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:54:12,706 INFO [Listener at localhost.localdomain/40767] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:54:12,707 INFO 
[RS:0;jenkins-hbase20:43601] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:54:12,707 INFO [RS:0;jenkins-hbase20:43601] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:54:12,707 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:54:12,707 INFO [RS:0;jenkins-hbase20:43601] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:54:12,707 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(3303): Received CLOSE for 7d3ec1626cddaefddbf2bda1e210ec54 2023-06-06 18:54:12,708 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(3303): Received CLOSE for 8496f87023f6c85979bba9a69c134613 2023-06-06 18:54:12,709 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:54:12,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7d3ec1626cddaefddbf2bda1e210ec54, disabling compactions & flushes 2023-06-06 18:54:12,709 DEBUG [RS:0;jenkins-hbase20:43601] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3acf9ad3 to 127.0.0.1:63828 2023-06-06 18:54:12,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:54:12,709 DEBUG [RS:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:54:12,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:54:12,709 INFO [RS:0;jenkins-hbase20:43601] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:54:12,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. after waiting 0 ms 2023-06-06 18:54:12,709 INFO [RS:0;jenkins-hbase20:43601] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-06 18:54:12,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:54:12,709 INFO [RS:0;jenkins-hbase20:43601] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
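From this point on the region server is stopping and closing its online regions as part of minicluster shutdown. In an HBaseTestingUtility-based test that whole sequence is normally driven by a single teardown call; a minimal sketch, assuming the same utility instance that started the cluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Tears down the region server, master, ZooKeeper and DFS started for the test,
    // producing STOPPING/Closing records like the ones around this point of the log.
    static void tearDown(HBaseTestingUtility util) throws Exception {
      util.shutdownMiniCluster();
    }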
2023-06-06 18:54:12,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 7d3ec1626cddaefddbf2bda1e210ec54 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-06 18:54:12,709 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:54:12,710 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-06 18:54:12,710 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1478): Online Regions={7d3ec1626cddaefddbf2bda1e210ec54=TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54., 1588230740=hbase:meta,,1.1588230740, 8496f87023f6c85979bba9a69c134613=hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613.} 2023-06-06 18:54:12,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:54:12,710 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:54:12,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:54:12,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:54:12,711 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:54:12,711 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-06 18:54:12,712 DEBUG [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1504): Waiting on 1588230740, 7d3ec1626cddaefddbf2bda1e210ec54, 8496f87023f6c85979bba9a69c134613 2023-06-06 18:54:12,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/40eb3d7dcca64062b6eeaecdf002e134 2023-06-06 18:54:12,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/.tmp/info/19776d21dc674205adab86e9600a31ea 2023-06-06 18:54:12,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/.tmp/info/40eb3d7dcca64062b6eeaecdf002e134 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/40eb3d7dcca64062b6eeaecdf002e134 2023-06-06 18:54:12,746 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-06 18:54:12,747 INFO [regionserver/jenkins-hbase20:0.Chore.1] 
hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-06 18:54:12,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/40eb3d7dcca64062b6eeaecdf002e134, entries=3, sequenceid=48, filesize=7.9 K 2023-06-06 18:54:12,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 7d3ec1626cddaefddbf2bda1e210ec54 in 49ms, sequenceid=48, compaction requested=true 2023-06-06 18:54:12,762 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f] to archive 2023-06-06 18:54:12,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/.tmp/table/5a609227f1f84439b63462e1b5fdde84 2023-06-06 18:54:12,763 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-06 18:54:12,769 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190 to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/archive/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/7b488e80f31e419886e98030db811190 2023-06-06 18:54:12,772 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9 to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/archive/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/0779e9b0730a4cdfbd9d12b849b242b9 2023-06-06 18:54:12,772 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/.tmp/info/19776d21dc674205adab86e9600a31ea as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/info/19776d21dc674205adab86e9600a31ea 2023-06-06 18:54:12,774 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/archive/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/info/96383990c5cb4b1f8c486f7d2a41ff9f 2023-06-06 18:54:12,780 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/info/19776d21dc674205adab86e9600a31ea, entries=20, sequenceid=14, filesize=7.4 K 2023-06-06 18:54:12,781 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/.tmp/table/5a609227f1f84439b63462e1b5fdde84 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/table/5a609227f1f84439b63462e1b5fdde84 2023-06-06 18:54:12,789 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/table/5a609227f1f84439b63462e1b5fdde84, entries=4, sequenceid=14, filesize=4.8 K 2023-06-06 18:54:12,790 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 79ms, sequenceid=14, compaction requested=false 2023-06-06 18:54:12,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-06 18:54:12,803 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:54:12,804 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:54:12,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:54:12,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-06 18:54:12,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/default/TestLogRolling-testSlowSyncLogRolling/7d3ec1626cddaefddbf2bda1e210ec54/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-06 18:54:12,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:54:12,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7d3ec1626cddaefddbf2bda1e210ec54: 2023-06-06 18:54:12,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1686077571168.7d3ec1626cddaefddbf2bda1e210ec54. 2023-06-06 18:54:12,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8496f87023f6c85979bba9a69c134613, disabling compactions & flushes 2023-06-06 18:54:12,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:54:12,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:54:12,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. after waiting 0 ms 2023-06-06 18:54:12,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 
2023-06-06 18:54:12,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 8496f87023f6c85979bba9a69c134613 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-06 18:54:12,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/.tmp/info/18ca755fd1c44385ba21156cbb1d0511 2023-06-06 18:54:12,834 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/.tmp/info/18ca755fd1c44385ba21156cbb1d0511 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/info/18ca755fd1c44385ba21156cbb1d0511 2023-06-06 18:54:12,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/info/18ca755fd1c44385ba21156cbb1d0511, entries=2, sequenceid=6, filesize=4.8 K 2023-06-06 18:54:12,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8496f87023f6c85979bba9a69c134613 in 33ms, sequenceid=6, compaction requested=false 2023-06-06 18:54:12,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/data/hbase/namespace/8496f87023f6c85979bba9a69c134613/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-06 18:54:12,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:54:12,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8496f87023f6c85979bba9a69c134613: 2023-06-06 18:54:12,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686077570239.8496f87023f6c85979bba9a69c134613. 2023-06-06 18:54:12,912 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43601,1686077568629; all regions closed. 
2023-06-06 18:54:12,914 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:54:12,928 DEBUG [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/oldWALs 2023-06-06 18:54:12,928 INFO [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43601%2C1686077568629.meta:.meta(num 1686077569998) 2023-06-06 18:54:12,929 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/WALs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:54:12,942 DEBUG [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/oldWALs 2023-06-06 18:54:12,942 INFO [RS:0;jenkins-hbase20:43601] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43601%2C1686077568629:(num 1686077627541) 2023-06-06 18:54:12,942 DEBUG [RS:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:54:12,942 INFO [RS:0;jenkins-hbase20:43601] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:54:12,942 INFO [RS:0;jenkins-hbase20:43601] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-06 18:54:12,943 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:54:12,943 INFO [RS:0;jenkins-hbase20:43601] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43601 2023-06-06 18:54:12,950 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:54:12,950 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43601,1686077568629 2023-06-06 18:54:12,951 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:54:12,952 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43601,1686077568629] 2023-06-06 18:54:12,952 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43601,1686077568629; numProcessing=1 2023-06-06 18:54:12,953 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43601,1686077568629 already deleted, retry=false 2023-06-06 18:54:12,954 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43601,1686077568629 expired; onlineServers=0 2023-06-06 18:54:12,954 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region 
server 'jenkins-hbase20.apache.org,45465,1686077567708' ***** 2023-06-06 18:54:12,954 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:54:12,954 DEBUG [M:0;jenkins-hbase20:45465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64489928, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:54:12,954 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:54:12,954 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45465,1686077567708; all regions closed. 2023-06-06 18:54:12,954 DEBUG [M:0;jenkins-hbase20:45465] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:54:12,954 DEBUG [M:0;jenkins-hbase20:45465] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:54:12,955 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-06 18:54:12,955 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077569510] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077569510,5,FailOnTimeoutGroup] 2023-06-06 18:54:12,955 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077569509] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077569509,5,FailOnTimeoutGroup] 2023-06-06 18:54:12,955 DEBUG [M:0;jenkins-hbase20:45465] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:54:12,956 INFO [M:0;jenkins-hbase20:45465] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:54:12,956 INFO [M:0;jenkins-hbase20:45465] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-06 18:54:12,956 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:54:12,956 INFO [M:0;jenkins-hbase20:45465] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:54:12,956 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:12,957 DEBUG [M:0;jenkins-hbase20:45465] master.HMaster(1512): Stopping service threads 2023-06-06 18:54:12,957 INFO [M:0;jenkins-hbase20:45465] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:54:12,957 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:54:12,958 INFO [M:0;jenkins-hbase20:45465] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:54:12,958 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-06 18:54:12,958 DEBUG [M:0;jenkins-hbase20:45465] zookeeper.ZKUtil(398): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:54:12,959 WARN [M:0;jenkins-hbase20:45465] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:54:12,959 INFO [M:0;jenkins-hbase20:45465] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:54:12,959 INFO [M:0;jenkins-hbase20:45465] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:54:12,960 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:54:12,960 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:12,960 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:12,960 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:54:12,960 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-06 18:54:12,960 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-06-06 18:54:12,980 INFO [M:0;jenkins-hbase20:45465] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/26337852d07642ed992ef71535133254 2023-06-06 18:54:12,985 INFO [M:0;jenkins-hbase20:45465] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 26337852d07642ed992ef71535133254 2023-06-06 18:54:12,986 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/26337852d07642ed992ef71535133254 as hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/26337852d07642ed992ef71535133254 2023-06-06 18:54:12,992 INFO [M:0;jenkins-hbase20:45465] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 26337852d07642ed992ef71535133254 2023-06-06 18:54:12,992 INFO [M:0;jenkins-hbase20:45465] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/26337852d07642ed992ef71535133254, entries=11, sequenceid=100, filesize=6.1 K 2023-06-06 18:54:12,993 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=100, compaction requested=false 2023-06-06 18:54:12,995 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:12,995 DEBUG [M:0;jenkins-hbase20:45465] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:54:12,995 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/MasterData/WALs/jenkins-hbase20.apache.org,45465,1686077567708 2023-06-06 18:54:12,999 INFO [M:0;jenkins-hbase20:45465] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:54:12,999 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:54:13,000 INFO [M:0;jenkins-hbase20:45465] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45465 2023-06-06 18:54:13,001 DEBUG [M:0;jenkins-hbase20:45465] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,45465,1686077567708 already deleted, retry=false 2023-06-06 18:54:13,052 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:54:13,052 INFO [RS:0;jenkins-hbase20:43601] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43601,1686077568629; zookeeper connection closed. 
2023-06-06 18:54:13,052 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): regionserver:43601-0x101c1c407fc0001, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:54:13,053 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15b587aa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15b587aa 2023-06-06 18:54:13,054 INFO [Listener at localhost.localdomain/40767] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-06 18:54:13,153 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:54:13,153 DEBUG [Listener at localhost.localdomain/40767-EventThread] zookeeper.ZKWatcher(600): master:45465-0x101c1c407fc0000, quorum=127.0.0.1:63828, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:54:13,153 INFO [M:0;jenkins-hbase20:45465] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45465,1686077567708; zookeeper connection closed. 2023-06-06 18:54:13,158 WARN [Listener at localhost.localdomain/40767] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:13,165 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:13,277 WARN [BP-950104307-148.251.75.209-1686077564510 heartbeating to localhost.localdomain/127.0.0.1:34031] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:13,277 WARN [BP-950104307-148.251.75.209-1686077564510 heartbeating to localhost.localdomain/127.0.0.1:34031] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-950104307-148.251.75.209-1686077564510 (Datanode Uuid ed1c069d-7f1b-45c7-adf9-82a007441050) service to localhost.localdomain/127.0.0.1:34031 2023-06-06 18:54:13,279 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/dfs/data/data3/current/BP-950104307-148.251.75.209-1686077564510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:13,279 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/dfs/data/data4/current/BP-950104307-148.251.75.209-1686077564510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:13,280 WARN [Listener at localhost.localdomain/40767] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:13,282 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:13,391 WARN [BP-950104307-148.251.75.209-1686077564510 heartbeating to localhost.localdomain/127.0.0.1:34031] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:13,391 WARN [BP-950104307-148.251.75.209-1686077564510 
heartbeating to localhost.localdomain/127.0.0.1:34031] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-950104307-148.251.75.209-1686077564510 (Datanode Uuid e26f4d78-5dee-4f66-aa6f-dfdb0475f216) service to localhost.localdomain/127.0.0.1:34031 2023-06-06 18:54:13,392 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/dfs/data/data1/current/BP-950104307-148.251.75.209-1686077564510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:13,392 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/cluster_9d874f28-a384-ff7f-6af1-056978b59f8c/dfs/data/data2/current/BP-950104307-148.251.75.209-1686077564510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:13,425 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:54:13,543 INFO [Listener at localhost.localdomain/40767] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:54:13,576 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:54:13,586 INFO [Listener at localhost.localdomain/40767] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:34031 from jenkins java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:34031 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:34031 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40767 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@77fa5f82 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:34031 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:34031 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=438 (was 263) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=109 (was 324), ProcessCount=170 (was 169) - ProcessCount LEAK? -, AvailableMemoryMB=5857 (was 7126) 2023-06-06 18:54:13,594 INFO [Listener at localhost.localdomain/40767] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=438, MaxFileDescriptor=60000, SystemLoadAverage=109, ProcessCount=170, AvailableMemoryMB=5857 2023-06-06 18:54:13,595 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:54:13,595 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/hadoop.log.dir so I do NOT create it in target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b 2023-06-06 18:54:13,595 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5c069203-4a28-023b-9214-2186a5ff703e/hadoop.tmp.dir so I do NOT create it in target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b 2023-06-06 18:54:13,595 INFO [Listener at localhost.localdomain/40767] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d, deleteOnExit=true 2023-06-06 18:54:13,595 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/test.cache.data in system properties and HBase conf 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/hadoop.log.dir in system properties and HBase conf 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting 
mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:54:13,596 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:54:13,596 DEBUG [Listener at localhost.localdomain/40767] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:54:13,597 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/nfs.dump.dir in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:54:13,598 INFO [Listener at localhost.localdomain/40767] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:54:13,600 WARN [Listener at localhost.localdomain/40767] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
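
The entries above show HBaseTestingUtility laying down a fresh test data directory and HDFS/YARN properties for testLogRollOnDatanodeDeath before bringing up the mini cluster described by the logged StartMiniClusterOption (1 master, 1 region server, 2 datanodes, 1 ZooKeeper server). A minimal sketch of how a test would request that topology through the public HBase 2.x testing API; the class/method names are the real test utility API, but the surrounding scaffolding is illustrative and not taken from this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors the logged option: 1 master, 1 region server, 2 datanodes,
        // 1 ZooKeeper server; everything else left at defaults.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region server
        try {
          // ... test body would go here ...
        } finally {
          util.shutdownMiniCluster();    // tears the whole mini cluster down again
        }
      }
    }
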
2023-06-06 18:54:13,602 WARN [Listener at localhost.localdomain/40767] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:54:13,602 WARN [Listener at localhost.localdomain/40767] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:54:13,628 WARN [Listener at localhost.localdomain/40767] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:13,630 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:13,638 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_localdomain_35635_hdfs____8ab70w/webapp 2023-06-06 18:54:13,727 INFO [Listener at localhost.localdomain/40767] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35635 2023-06-06 18:54:13,729 WARN [Listener at localhost.localdomain/40767] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:54:13,730 WARN [Listener at localhost.localdomain/40767] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:54:13,731 WARN [Listener at localhost.localdomain/40767] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:54:13,751 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:54:13,757 WARN [Listener at localhost.localdomain/44371] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:13,769 WARN [Listener at localhost.localdomain/44371] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:13,772 WARN [Listener at localhost.localdomain/44371] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:13,773 INFO [Listener at localhost.localdomain/44371] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:13,779 INFO [Listener at localhost.localdomain/44371] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_38253_datanode____.c96g26/webapp 2023-06-06 18:54:13,850 INFO [Listener at localhost.localdomain/44371] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38253 2023-06-06 18:54:13,856 WARN [Listener at localhost.localdomain/43085] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:13,870 WARN [Listener at localhost.localdomain/43085] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:13,873 WARN [Listener at localhost.localdomain/43085] http.HttpRequestLog(97): Jetty 
request log can only be enabled using Log4j 2023-06-06 18:54:13,874 INFO [Listener at localhost.localdomain/43085] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:13,882 INFO [Listener at localhost.localdomain/43085] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_36983_datanode____c0fbq0/webapp 2023-06-06 18:54:13,937 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4afa50033090fa: Processing first storage report for DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e from datanode 1976133d-36b0-4957-81f6-fd8b4c98e8bf 2023-06-06 18:54:13,937 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c4afa50033090fa: from storage DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e node DatanodeRegistration(127.0.0.1:43943, datanodeUuid=1976133d-36b0-4957-81f6-fd8b4c98e8bf, infoPort=40419, infoSecurePort=0, ipcPort=43085, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:13,937 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4afa50033090fa: Processing first storage report for DS-1366d7a9-dee6-4661-96e3-bd02f5b1e989 from datanode 1976133d-36b0-4957-81f6-fd8b4c98e8bf 2023-06-06 18:54:13,937 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c4afa50033090fa: from storage DS-1366d7a9-dee6-4661-96e3-bd02f5b1e989 node DatanodeRegistration(127.0.0.1:43943, datanodeUuid=1976133d-36b0-4957-81f6-fd8b4c98e8bf, infoPort=40419, infoSecurePort=0, ipcPort=43085, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:13,973 INFO [Listener at localhost.localdomain/43085] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36983 2023-06-06 18:54:13,982 WARN [Listener at localhost.localdomain/33801] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:14,047 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef721a125580a738: Processing first storage report for DS-99ad7fc0-7940-4a82-936a-54815518e387 from datanode 3664d1ed-04b7-4984-8af6-3273fcedd7ee 2023-06-06 18:54:14,048 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef721a125580a738: from storage DS-99ad7fc0-7940-4a82-936a-54815518e387 node DatanodeRegistration(127.0.0.1:37031, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=39259, infoSecurePort=0, ipcPort=33801, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:14,048 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef721a125580a738: Processing first storage report for DS-2eda66b2-5b6e-465f-b5a5-aec18bf6337b from datanode 3664d1ed-04b7-4984-8af6-3273fcedd7ee 2023-06-06 18:54:14,048 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef721a125580a738: from 
storage DS-2eda66b2-5b6e-465f-b5a5-aec18bf6337b node DatanodeRegistration(127.0.0.1:37031, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=39259, infoSecurePort=0, ipcPort=33801, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:14,092 DEBUG [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b 2023-06-06 18:54:14,096 INFO [Listener at localhost.localdomain/33801] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/zookeeper_0, clientPort=62595, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:54:14,098 INFO [Listener at localhost.localdomain/33801] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62595 2023-06-06 18:54:14,098 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,100 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,120 INFO [Listener at localhost.localdomain/33801] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5 with version=8 2023-06-06 18:54:14,121 INFO [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:54:14,123 INFO [Listener at 
localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:54:14,123 INFO [Listener at localhost.localdomain/33801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:54:14,125 INFO [Listener at localhost.localdomain/33801] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45701 2023-06-06 18:54:14,125 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,126 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,128 INFO [Listener at localhost.localdomain/33801] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45701 connecting to ZooKeeper ensemble=127.0.0.1:62595 2023-06-06 18:54:14,133 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:457010x0, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:54:14,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45701-0x101c1c55c7a0000 connected 2023-06-06 18:54:14,150 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:54:14,151 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:54:14,151 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:54:14,152 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45701 2023-06-06 18:54:14,153 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45701 2023-06-06 18:54:14,154 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45701 2023-06-06 18:54:14,155 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45701 2023-06-06 18:54:14,155 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45701 
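
The block above records the master's NettyRpcServer coming up on port 45701 with small FPBQ/RWQ call queues (handlerCount=3, read/write queues split). A hedged sketch of how a test can keep the RPC layer this small by setting configuration on the shared test Configuration before startMiniCluster(); both keys are standard HBase configuration, but whether this particular run's handlerCount=3 comes from these keys or from test defaults is not visible in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    class RpcTuningSketch {
      // Call before util.startMiniCluster(...); values here are illustrative.
      static void shrinkRpc(HBaseTestingUtility util) {
        Configuration conf = util.getConfiguration();
        conf.setInt("hbase.regionserver.handler.count", 3);           // size of the RPC handler pools
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f); // reserve a share of call queues for reads
      }
    }
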
2023-06-06 18:54:14,155 INFO [Listener at localhost.localdomain/33801] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5, hbase.cluster.distributed=false 2023-06-06 18:54:14,167 INFO [Listener at localhost.localdomain/33801] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:54:14,167 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,168 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,168 INFO [Listener at localhost.localdomain/33801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:54:14,168 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:14,168 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:54:14,168 INFO [Listener at localhost.localdomain/33801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:54:14,171 INFO [Listener at localhost.localdomain/33801] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41189 2023-06-06 18:54:14,171 INFO [Listener at localhost.localdomain/33801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:54:14,172 DEBUG [Listener at localhost.localdomain/33801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:54:14,173 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,174 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,175 INFO [Listener at localhost.localdomain/33801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41189 connecting to ZooKeeper ensemble=127.0.0.1:62595 2023-06-06 18:54:14,178 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:411890x0, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:54:14,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41189-0x101c1c55c7a0001 connected 2023-06-06 18:54:14,179 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:54:14,180 DEBUG [Listener 
at localhost.localdomain/33801] zookeeper.ZKUtil(164): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:54:14,180 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:54:14,182 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41189 2023-06-06 18:54:14,182 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41189 2023-06-06 18:54:14,182 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41189 2023-06-06 18:54:14,183 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41189 2023-06-06 18:54:14,183 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41189 2023-06-06 18:54:14,184 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,193 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:54:14,193 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,206 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:54:14,206 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:54:14,206 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,207 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:54:14,208 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,45701,1686077654122 from backup master directory 2023-06-06 18:54:14,209 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/master 
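
Above, the master registers its backup-master znode and both processes set watchers under baseZNode=/hbase on the mini ZooKeeper ensemble at 127.0.0.1:62595. A small sketch of inspecting the active-master znode with the plain ZooKeeper client; the ensemble address and znode path are copied from this log, but the client port is assigned dynamically per run, so real test code should read it from the running utility instead of hard-coding it:

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    class MasterZnodeSketch {
      // Returns true once a master has won the election and created /hbase/master.
      static boolean activeMasterZnodeExists() throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62595", 30000, event -> { });
        try {
          Stat stat = zk.exists("/hbase/master", false);
          return stat != null;
        } finally {
          zk.close();
        }
      }
    }
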
2023-06-06 18:54:14,210 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,210 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:54:14,210 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-06 18:54:14,210 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,229 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/hbase.id with ID: 0cecc49b-7bc7-4ea3-a776-2cbce8608635 2023-06-06 18:54:14,243 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:14,246 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6b6c2b66 to 127.0.0.1:62595 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:54:14,263 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6891a4f3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:54:14,263 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:54:14,264 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:54:14,265 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:54:14,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store-tmp 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:54:14,281 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:14,281 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:54:14,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:54:14,282 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,285 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45701%2C1686077654122, suffix=, logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122, archiveDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/oldWALs, maxLogs=10 2023-06-06 18:54:14,292 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077654285 2023-06-06 18:54:14,292 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK], DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] 2023-06-06 18:54:14,292 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:54:14,292 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:14,292 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,295 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,297 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:54:14,297 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:54:14,298 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,299 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,300 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,305 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:54:14,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:54:14,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=799994, jitterRate=0.01724572479724884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:54:14,309 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:54:14,309 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:54:14,311 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:54:14,311 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:54:14,311 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-06 18:54:14,312 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-06 18:54:14,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-06 18:54:14,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:54:14,314 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:54:14,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:54:14,325 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:54:14,325 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
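
A few entries earlier, the FSHLog provider reported the WAL configuration for the master's local store (blocksize=256 MB, rollsize=128 MB, maxLogs=10). Tests in the TestLogRolling family usually make rolls cheap to trigger by lowering these limits before the cluster starts; the keys below are standard HBase configuration, but the concrete values are illustrative and not taken from this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    class WalRollTuningSketch {
      // Call before util.startMiniCluster(...); values here are illustrative.
      static void makeRollsCheap(HBaseTestingUtility util) {
        Configuration conf = util.getConfiguration();
        conf.setLong("hbase.regionserver.hlog.blocksize", 2L * 1024 * 1024); // small WAL block size
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);        // roll at 50% of the block size
        conf.setInt("hbase.regionserver.maxlogs", 10);                        // matches maxLogs=10 logged above
      }
    }
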
2023-06-06 18:54:14,325 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:54:14,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:54:14,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:54:14,329 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,330 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:54:14,331 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:54:14,332 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:54:14,332 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:54:14,332 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:54:14,332 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,334 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,45701,1686077654122, sessionid=0x101c1c55c7a0000, setting cluster-up flag (Was=false) 2023-06-06 18:54:14,336 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:54:14,340 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,342 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,345 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:54:14,346 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:14,347 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.hbase-snapshot/.tmp 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:54:14,350 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,351 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:54:14,351 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,353 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077684353 2023-06-06 18:54:14,353 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:54:14,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:54:14,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:54:14,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:54:14,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:54:14,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:54:14,357 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,357 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:54:14,357 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:54:14,358 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:54:14,358 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:54:14,358 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:54:14,358 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:54:14,358 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:54:14,359 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077654358,5,FailOnTimeoutGroup] 2023-06-06 18:54:14,359 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077654359,5,FailOnTimeoutGroup] 2023-06-06 18:54:14,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-06 18:54:14,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-06 18:54:14,359 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:54:14,372 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:54:14,372 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:54:14,372 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5 2023-06-06 18:54:14,383 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:14,385 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:54:14,386 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(951): ClusterId : 0cecc49b-7bc7-4ea3-a776-2cbce8608635 2023-06-06 
18:54:14,387 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:54:14,387 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/info 2023-06-06 18:54:14,388 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:54:14,389 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,389 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:54:14,389 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:54:14,390 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:54:14,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:54:14,392 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:54:14,392 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:54:14,394 DEBUG [RS:0;jenkins-hbase20:41189] zookeeper.ReadOnlyZKClient(139): Connect 0x07a4e54a to 127.0.0.1:62595 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:54:14,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:54:14,398 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/table 2023-06-06 18:54:14,399 DEBUG [RS:0;jenkins-hbase20:41189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9631bce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:54:14,399 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:54:14,399 DEBUG [RS:0;jenkins-hbase20:41189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d5c8e3a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:54:14,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,402 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740 2023-06-06 18:54:14,403 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740 2023-06-06 18:54:14,406 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-06 18:54:14,407 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:54:14,409 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:54:14,410 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=845179, jitterRate=0.07470156252384186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:54:14,410 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:54:14,410 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:54:14,410 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:54:14,410 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:54:14,410 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:54:14,410 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:54:14,411 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:54:14,411 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:54:14,413 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:54:14,413 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:54:14,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:54:14,414 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:41189 2023-06-06 18:54:14,414 INFO [RS:0;jenkins-hbase20:41189] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:54:14,414 INFO [RS:0;jenkins-hbase20:41189] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:54:14,414 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-06 18:54:14,415 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:54:14,416 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,45701,1686077654122 with isa=jenkins-hbase20.apache.org/148.251.75.209:41189, startcode=1686077654167 2023-06-06 18:54:14,416 DEBUG [RS:0;jenkins-hbase20:41189] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:54:14,416 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:54:14,420 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47917, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:54:14,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,421 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5 2023-06-06 18:54:14,421 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44371 2023-06-06 18:54:14,421 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:54:14,423 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:54:14,424 DEBUG [RS:0;jenkins-hbase20:41189] zookeeper.ZKUtil(162): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,424 WARN [RS:0;jenkins-hbase20:41189] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-06 18:54:14,424 INFO [RS:0;jenkins-hbase20:41189] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:54:14,424 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,424 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,41189,1686077654167] 2023-06-06 18:54:14,428 DEBUG [RS:0;jenkins-hbase20:41189] zookeeper.ZKUtil(162): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,429 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:54:14,429 INFO [RS:0;jenkins-hbase20:41189] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:54:14,431 INFO [RS:0;jenkins-hbase20:41189] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:54:14,431 INFO [RS:0;jenkins-hbase20:41189] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:54:14,431 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,431 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:54:14,433 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:54:14,433 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,434 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,434 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,434 DEBUG [RS:0;jenkins-hbase20:41189] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:14,438 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,438 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,439 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,454 INFO [RS:0;jenkins-hbase20:41189] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:54:14,454 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41189,1686077654167-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:54:14,465 INFO [RS:0;jenkins-hbase20:41189] regionserver.Replication(203): jenkins-hbase20.apache.org,41189,1686077654167 started 2023-06-06 18:54:14,465 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,41189,1686077654167, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:41189, sessionid=0x101c1c55c7a0001 2023-06-06 18:54:14,465 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:54:14,465 DEBUG [RS:0;jenkins-hbase20:41189] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,465 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41189,1686077654167' 2023-06-06 18:54:14,465 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:54:14,466 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41189,1686077654167' 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:54:14,467 DEBUG [RS:0;jenkins-hbase20:41189] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:54:14,468 INFO [RS:0;jenkins-hbase20:41189] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:54:14,468 INFO [RS:0;jenkins-hbase20:41189] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-06 18:54:14,567 DEBUG [jenkins-hbase20:45701] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:54:14,567 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,41189,1686077654167, state=OPENING 2023-06-06 18:54:14,569 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:54:14,569 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:14,570 INFO [RS:0;jenkins-hbase20:41189] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41189%2C1686077654167, suffix=, logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167, archiveDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/oldWALs, maxLogs=32 2023-06-06 18:54:14,570 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:54:14,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,41189,1686077654167}] 2023-06-06 18:54:14,585 INFO [RS:0;jenkins-hbase20:41189] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077654572 2023-06-06 18:54:14,585 DEBUG [RS:0;jenkins-hbase20:41189] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]] 2023-06-06 18:54:14,727 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:14,727 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:54:14,731 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:54:14,738 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:54:14,739 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:54:14,742 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167, archiveDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/oldWALs, maxLogs=32 2023-06-06 18:54:14,758 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077654745.meta 2023-06-06 18:54:14,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]] 2023-06-06 18:54:14,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:54:14,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:54:14,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:54:14,760 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:54:14,760 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:54:14,760 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:14,760 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:54:14,761 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:54:14,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:54:14,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/info 2023-06-06 18:54:14,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/info 2023-06-06 18:54:14,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:54:14,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:54:14,767 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:54:14,767 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:54:14,768 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:54:14,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:54:14,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/table 2023-06-06 18:54:14,771 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740/table 2023-06-06 18:54:14,772 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:54:14,772 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:14,774 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740 2023-06-06 18:54:14,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/meta/1588230740 2023-06-06 18:54:14,778 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:54:14,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:54:14,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=824236, jitterRate=0.04807136952877045}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:54:14,782 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:54:14,784 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077654727 2023-06-06 18:54:14,789 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:54:14,790 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:54:14,791 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,41189,1686077654167, state=OPEN 2023-06-06 18:54:14,793 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:54:14,793 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:54:14,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:54:14,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,41189,1686077654167 in 223 msec 2023-06-06 
18:54:14,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:54:14,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 384 msec 2023-06-06 18:54:14,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 455 msec 2023-06-06 18:54:14,805 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077654805, completionTime=-1 2023-06-06 18:54:14,805 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:54:14,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:54:14,809 DEBUG [hconnection-0x5f28ffc7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:54:14,812 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:54:14,813 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:54:14,813 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077714813 2023-06-06 18:54:14,813 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686077774813 2023-06-06 18:54:14,813 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45701,1686077654122-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45701,1686077654122-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45701,1686077654122-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:45701, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-06 18:54:14,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:54:14,821 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:54:14,821 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:54:14,823 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:54:14,824 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:54:14,826 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/hbase/namespace/55728f232185f026efb1140d168ef73d 2023-06-06 18:54:14,827 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/hbase/namespace/55728f232185f026efb1140d168ef73d empty. 2023-06-06 18:54:14,828 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/hbase/namespace/55728f232185f026efb1140d168ef73d 2023-06-06 18:54:14,828 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:54:14,844 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:54:14,846 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 55728f232185f026efb1140d168ef73d, NAME => 'hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp 2023-06-06 18:54:14,856 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:14,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 55728f232185f026efb1140d168ef73d, disabling compactions & flushes 2023-06-06 18:54:14,857 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:14,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:14,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. after waiting 0 ms 2023-06-06 18:54:14,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:14,857 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:14,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 55728f232185f026efb1140d168ef73d: 2023-06-06 18:54:14,861 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:54:14,863 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077654863"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077654863"}]},"ts":"1686077654863"} 2023-06-06 18:54:14,866 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:54:14,868 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:54:14,868 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077654868"}]},"ts":"1686077654868"} 2023-06-06 18:54:14,870 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:54:14,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=55728f232185f026efb1140d168ef73d, ASSIGN}] 2023-06-06 18:54:14,877 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=55728f232185f026efb1140d168ef73d, ASSIGN 2023-06-06 18:54:14,879 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=55728f232185f026efb1140d168ef73d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,41189,1686077654167; forceNewPlan=false, retain=false 2023-06-06 18:54:15,030 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=55728f232185f026efb1140d168ef73d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:15,031 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077655030"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077655030"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077655030"}]},"ts":"1686077655030"} 2023-06-06 18:54:15,036 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 55728f232185f026efb1140d168ef73d, server=jenkins-hbase20.apache.org,41189,1686077654167}] 2023-06-06 18:54:15,199 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:15,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55728f232185f026efb1140d168ef73d, NAME => 'hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:54:15,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:15,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,203 INFO [StoreOpener-55728f232185f026efb1140d168ef73d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,204 DEBUG [StoreOpener-55728f232185f026efb1140d168ef73d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/namespace/55728f232185f026efb1140d168ef73d/info 2023-06-06 18:54:15,204 DEBUG [StoreOpener-55728f232185f026efb1140d168ef73d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/namespace/55728f232185f026efb1140d168ef73d/info 2023-06-06 18:54:15,205 INFO [StoreOpener-55728f232185f026efb1140d168ef73d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55728f232185f026efb1140d168ef73d columnFamilyName info 2023-06-06 18:54:15,205 INFO [StoreOpener-55728f232185f026efb1140d168ef73d-1] regionserver.HStore(310): Store=55728f232185f026efb1140d168ef73d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:15,207 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/namespace/55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,208 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/namespace/55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 55728f232185f026efb1140d168ef73d 2023-06-06 18:54:15,221 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/hbase/namespace/55728f232185f026efb1140d168ef73d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:54:15,222 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 55728f232185f026efb1140d168ef73d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=706261, jitterRate=-0.10194289684295654}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:54:15,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 55728f232185f026efb1140d168ef73d: 2023-06-06 18:54:15,227 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d., pid=6, masterSystemTime=1686077655190 2023-06-06 18:54:15,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:54:15,231 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 
2023-06-06 18:54:15,231 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=55728f232185f026efb1140d168ef73d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:15,232 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077655231"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077655231"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077655231"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077655231"}]},"ts":"1686077655231"} 2023-06-06 18:54:15,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:54:15,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 55728f232185f026efb1140d168ef73d, server=jenkins-hbase20.apache.org,41189,1686077654167 in 198 msec 2023-06-06 18:54:15,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:54:15,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=55728f232185f026efb1140d168ef73d, ASSIGN in 362 msec 2023-06-06 18:54:15,243 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:54:15,243 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077655243"}]},"ts":"1686077655243"} 2023-06-06 18:54:15,245 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:54:15,248 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:54:15,251 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 429 msec 2023-06-06 18:54:15,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:54:15,324 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:54:15,325 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:15,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:54:15,345 DEBUG [Listener at localhost.localdomain/33801-EventThread] 
zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:54:15,350 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-06-06 18:54:15,357 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:54:15,369 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:54:15,374 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-06-06 18:54:15,393 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:54:15,394 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.185sec 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45701,1686077654122-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:54:15,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45701,1686077654122-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-06 18:54:15,398 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:54:15,486 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ReadOnlyZKClient(139): Connect 0x402f9e8f to 127.0.0.1:62595 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:54:15,497 DEBUG [Listener at localhost.localdomain/33801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2544bbf2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:54:15,500 DEBUG [hconnection-0x72391d27-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:54:15,503 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34816, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:54:15,506 INFO [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:54:15,507 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:15,510 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:54:15,510 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:54:15,511 INFO [Listener at localhost.localdomain/33801] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:54:15,523 INFO [Listener at localhost.localdomain/33801] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:54:15,523 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:15,523 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:15,523 INFO [Listener at localhost.localdomain/33801] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:54:15,523 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:54:15,524 INFO [Listener at localhost.localdomain/33801] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 
18:54:15,524 INFO [Listener at localhost.localdomain/33801] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:54:15,525 INFO [Listener at localhost.localdomain/33801] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36235 2023-06-06 18:54:15,525 INFO [Listener at localhost.localdomain/33801] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:54:15,526 DEBUG [Listener at localhost.localdomain/33801] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:54:15,527 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:15,528 INFO [Listener at localhost.localdomain/33801] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:54:15,528 INFO [Listener at localhost.localdomain/33801] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36235 connecting to ZooKeeper ensemble=127.0.0.1:62595 2023-06-06 18:54:15,531 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:362350x0, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:54:15,533 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36235-0x101c1c55c7a0005 connected 2023-06-06 18:54:15,533 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(162): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:54:15,534 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(162): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-06-06 18:54:15,534 DEBUG [Listener at localhost.localdomain/33801] zookeeper.ZKUtil(164): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:54:15,537 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36235 2023-06-06 18:54:15,538 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36235 2023-06-06 18:54:15,540 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36235 2023-06-06 18:54:15,540 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36235 2023-06-06 18:54:15,541 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36235 2023-06-06 18:54:15,544 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(951): ClusterId : 0cecc49b-7bc7-4ea3-a776-2cbce8608635 2023-06-06 18:54:15,544 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(43): 
Procedure flush-table-proc initializing 2023-06-06 18:54:15,547 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:54:15,547 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:54:15,549 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:54:15,550 DEBUG [RS:1;jenkins-hbase20:36235] zookeeper.ReadOnlyZKClient(139): Connect 0x7ab29bf0 to 127.0.0.1:62595 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:54:15,560 DEBUG [RS:1;jenkins-hbase20:36235] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74756ad6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:54:15,560 DEBUG [RS:1;jenkins-hbase20:36235] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@704af080, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:54:15,572 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:36235 2023-06-06 18:54:15,572 INFO [RS:1;jenkins-hbase20:36235] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:54:15,573 INFO [RS:1;jenkins-hbase20:36235] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:54:15,573 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-06 18:54:15,573 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,45701,1686077654122 with isa=jenkins-hbase20.apache.org/148.251.75.209:36235, startcode=1686077655523 2023-06-06 18:54:15,574 DEBUG [RS:1;jenkins-hbase20:36235] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:54:15,577 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48269, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:54:15,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,578 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5 2023-06-06 18:54:15,578 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44371 2023-06-06 18:54:15,578 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:54:15,579 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:54:15,579 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:54:15,579 DEBUG [RS:1;jenkins-hbase20:36235] zookeeper.ZKUtil(162): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,579 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36235,1686077655523] 2023-06-06 18:54:15,579 WARN [RS:1;jenkins-hbase20:36235] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-06 18:54:15,580 INFO [RS:1;jenkins-hbase20:36235] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:54:15,580 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:15,580 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,580 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,586 DEBUG [RS:1;jenkins-hbase20:36235] zookeeper.ZKUtil(162): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:15,587 DEBUG [RS:1;jenkins-hbase20:36235] zookeeper.ZKUtil(162): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,588 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:54:15,588 INFO [RS:1;jenkins-hbase20:36235] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:54:15,594 INFO [RS:1;jenkins-hbase20:36235] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:54:15,595 INFO [RS:1;jenkins-hbase20:36235] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:54:15,595 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:15,596 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:54:15,597 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-06 18:54:15,597 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,597 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,597 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,598 DEBUG [RS:1;jenkins-hbase20:36235] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:54:15,599 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:15,599 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:15,599 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:54:15,609 INFO [RS:1;jenkins-hbase20:36235] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:54:15,610 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36235,1686077655523-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:54:15,619 INFO [RS:1;jenkins-hbase20:36235] regionserver.Replication(203): jenkins-hbase20.apache.org,36235,1686077655523 started 2023-06-06 18:54:15,619 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36235,1686077655523, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36235, sessionid=0x101c1c55c7a0005 2023-06-06 18:54:15,619 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:54:15,619 INFO [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:36235,5,FailOnTimeoutGroup] 2023-06-06 18:54:15,619 DEBUG [RS:1;jenkins-hbase20:36235] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,620 INFO [Listener at localhost.localdomain/33801] wal.TestLogRolling(323): Replication=2 2023-06-06 18:54:15,620 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36235,1686077655523' 2023-06-06 18:54:15,621 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:54:15,621 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:54:15,623 DEBUG [Listener at localhost.localdomain/33801] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-06 18:54:15,623 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:54:15,623 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:54:15,623 DEBUG [RS:1;jenkins-hbase20:36235] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:54:15,623 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36235,1686077655523' 2023-06-06 18:54:15,623 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:54:15,624 DEBUG [RS:1;jenkins-hbase20:36235] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:54:15,624 DEBUG [RS:1;jenkins-hbase20:36235] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:54:15,625 INFO [RS:1;jenkins-hbase20:36235] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:54:15,625 INFO [RS:1;jenkins-hbase20:36235] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
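The entries above trace a second region server (RS:1 on port 36235) joining the minicluster: RPC executors, ZooKeeper registration, reportForDuty to the master, FSHLogProvider selection, chores, and the flush-table-proc / online-snapshot procedure members. As a rough, hedged sketch only (standard HBase 2.x test-utility calls, not the actual TestLogRolling source), a test can bring up such an extra region server like this:

    // Minimal sketch under assumptions: start a minicluster with one region server,
    // then add a second one, mirroring the RS:1 startup recorded above.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

    public class ExtraRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(1);                 // 1 master, 1 region server, mini DFS + ZK
        RegionServerThread rs1 = util.getMiniHBaseCluster().startRegionServer();
        rs1.waitForServerOnline();                // block until it has reported for duty
        util.shutdownMiniCluster();
      }
    }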
2023-06-06 18:54:15,626 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53090, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-06 18:54:15,628 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-06 18:54:15,628 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-06 18:54:15,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:54:15,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-06-06 18:54:15,632 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:54:15,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-06-06 18:54:15,633 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:54:15,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:54:15,635 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:15,635 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b empty. 
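The master-side entries above record the client request to create 'TestLogRolling-testLogRollOnDatanodeDeath' with a single 'info' family and deliberately small MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192), which is what the two TableDescriptorChecker warnings are complaining about. A hedged sketch of issuing an equivalent request through the public Admin API (illustrative only; the real test may build its descriptor differently):

    // Sketch under assumptions, not the test's code: build a one-family descriptor
    // with the small file size / flush size seen in the log, then create the table.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateLogRollTableSketch {
      static void createTestTable(HBaseTestingUtility util) throws Exception {
        Admin admin = util.getAdmin();
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .setMaxFileSize(786432L)       // source of the MAX_FILESIZE warning above
            .setMemStoreFlushSize(8192L)   // source of the MEMSTORE_FLUSHSIZE warning above
            .build());                     // returns once the create procedure completes
      }
    }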
2023-06-06 18:54:15,636 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:15,636 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-06 18:54:15,652 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-06 18:54:15,653 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0a8ab02613e1a565df840dc6f149757b, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/.tmp 2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 0a8ab02613e1a565df840dc6f149757b, disabling compactions & flushes 2023-06-06 18:54:15,667 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. after waiting 0 ms 2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:54:15,667 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 
2023-06-06 18:54:15,667 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 0a8ab02613e1a565df840dc6f149757b: 2023-06-06 18:54:15,670 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:54:15,671 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686077655671"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077655671"}]},"ts":"1686077655671"} 2023-06-06 18:54:15,673 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:54:15,674 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:54:15,675 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077655674"}]},"ts":"1686077655674"} 2023-06-06 18:54:15,676 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-06 18:54:15,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-06-06 18:54:15,685 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-06 18:54:15,685 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-06 18:54:15,685 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-06 18:54:15,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a8ab02613e1a565df840dc6f149757b, ASSIGN}] 2023-06-06 18:54:15,688 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a8ab02613e1a565df840dc6f149757b, ASSIGN 2023-06-06 18:54:15,689 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a8ab02613e1a565df840dc6f149757b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,41189,1686077654167; forceNewPlan=false, retain=false 2023-06-06 18:54:15,730 INFO [RS:1;jenkins-hbase20:36235] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36235%2C1686077655523, suffix=, logDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,36235,1686077655523, 
archiveDir=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/oldWALs, maxLogs=32 2023-06-06 18:54:15,750 INFO [RS:1;jenkins-hbase20:36235] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,36235,1686077655523/jenkins-hbase20.apache.org%2C36235%2C1686077655523.1686077655733 2023-06-06 18:54:15,750 DEBUG [RS:1;jenkins-hbase20:36235] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK], DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] 2023-06-06 18:54:15,842 INFO [jenkins-hbase20:45701] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-06-06 18:54:15,844 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0a8ab02613e1a565df840dc6f149757b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:15,845 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686077655844"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077655844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077655844"}]},"ts":"1686077655844"} 2023-06-06 18:54:15,849 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 0a8ab02613e1a565df840dc6f149757b, server=jenkins-hbase20.apache.org,41189,1686077654167}] 2023-06-06 18:54:16,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 
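The WAL entries above show the new server's FSHLog created with blocksize=256 MB, rollsize=128 MB, maxLogs=32; the roll size is the block size scaled by the roll multiplier. A hedged configuration sketch using the standard keys (values chosen to match the figures in the log, not copied from the test's setup code):

    // Assumed-standard configuration keys behind the WAL sizes reported above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollConfigSketch {
      static Configuration walRollConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = blocksize * multiplier
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // WAL count before forced flushes
        return conf;
      }
    }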
2023-06-06 18:54:16,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a8ab02613e1a565df840dc6f149757b, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:54:16,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:54:16,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,020 INFO [StoreOpener-0a8ab02613e1a565df840dc6f149757b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,022 DEBUG [StoreOpener-0a8ab02613e1a565df840dc6f149757b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info 2023-06-06 18:54:16,022 DEBUG [StoreOpener-0a8ab02613e1a565df840dc6f149757b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info 2023-06-06 18:54:16,022 INFO [StoreOpener-0a8ab02613e1a565df840dc6f149757b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a8ab02613e1a565df840dc6f149757b columnFamilyName info 2023-06-06 18:54:16,023 INFO [StoreOpener-0a8ab02613e1a565df840dc6f149757b-1] regionserver.HStore(310): Store=0a8ab02613e1a565df840dc6f149757b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:54:16,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:16,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:54:16,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0a8ab02613e1a565df840dc6f149757b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=728554, jitterRate=-0.07359600067138672}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:54:16,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0a8ab02613e1a565df840dc6f149757b: 2023-06-06 18:54:16,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b., pid=11, masterSystemTime=1686077656003 2023-06-06 18:54:16,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:54:16,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 
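With the region open (next sequenceid=2 above), clients can write to it, and each put is recorded in the region server's WAL as well as the memstore, which is the path a log-rolling test exercises. A generic, illustrative put (row key, qualifier and value are made-up names, not taken from the test):

    // Illustrative only: write one cell to the table opened above.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutSketch {
      static void writeOneRow(Connection conn) throws Exception {
        try (Table table = conn.getTable(
            TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))) {
          Put put = new Put(Bytes.toBytes("row0001"));
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v"));
          table.put(put);   // the edit goes through the region server's WAL
        }
      }
    }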
2023-06-06 18:54:16,039 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0a8ab02613e1a565df840dc6f149757b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:54:16,039 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686077656038"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077656038"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077656038"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077656038"}]},"ts":"1686077656038"} 2023-06-06 18:54:16,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-06 18:54:16,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 0a8ab02613e1a565df840dc6f149757b, server=jenkins-hbase20.apache.org,41189,1686077654167 in 193 msec 2023-06-06 18:54:16,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-06 18:54:16,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a8ab02613e1a565df840dc6f149757b, ASSIGN in 361 msec 2023-06-06 18:54:16,051 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:54:16,052 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077656051"}]},"ts":"1686077656051"} 2023-06-06 18:54:16,053 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-06 18:54:16,055 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:54:16,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 427 msec 2023-06-06 18:54:18,359 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:54:20,429 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-06 18:54:20,430 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-06 18:54:20,430 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-06 18:54:25,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:54:25,637 INFO [Listener at localhost.localdomain/33801] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-06 18:54:25,644 DEBUG [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-06-06 18:54:25,645 DEBUG [Listener at localhost.localdomain/33801] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:54:25,662 WARN [Listener at localhost.localdomain/33801] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:25,665 WARN [Listener at localhost.localdomain/33801] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:25,667 INFO [Listener at localhost.localdomain/33801] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:25,674 INFO [Listener at localhost.localdomain/33801] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_45307_datanode____br41q8/webapp 2023-06-06 18:54:25,751 INFO [Listener at localhost.localdomain/33801] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45307 2023-06-06 18:54:25,761 WARN [Listener at localhost.localdomain/35907] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:25,782 WARN [Listener at localhost.localdomain/35907] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:25,787 WARN [Listener at localhost.localdomain/35907] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:25,789 INFO [Listener at localhost.localdomain/35907] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:25,804 INFO [Listener at localhost.localdomain/35907] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_35011_datanode____6cxrpx/webapp 2023-06-06 18:54:25,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad7af6288b2e0709: Processing first storage report for DS-0144582b-86cf-4961-a8ca-468d58cee2ef from datanode 1d09127b-2110-4804-b821-26a66a6aa8b2 2023-06-06 18:54:25,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad7af6288b2e0709: from storage DS-0144582b-86cf-4961-a8ca-468d58cee2ef node DatanodeRegistration(127.0.0.1:34465, datanodeUuid=1d09127b-2110-4804-b821-26a66a6aa8b2, infoPort=43713, infoSecurePort=0, ipcPort=35907, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:54:25,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad7af6288b2e0709: Processing first storage report for 
DS-52fbf507-d2bb-42eb-bcee-d3fe8a784800 from datanode 1d09127b-2110-4804-b821-26a66a6aa8b2 2023-06-06 18:54:25,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad7af6288b2e0709: from storage DS-52fbf507-d2bb-42eb-bcee-d3fe8a784800 node DatanodeRegistration(127.0.0.1:34465, datanodeUuid=1d09127b-2110-4804-b821-26a66a6aa8b2, infoPort=43713, infoSecurePort=0, ipcPort=35907, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:25,903 INFO [Listener at localhost.localdomain/35907] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35011 2023-06-06 18:54:25,912 WARN [Listener at localhost.localdomain/37995] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:26,008 WARN [Listener at localhost.localdomain/37995] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:26,013 WARN [Listener at localhost.localdomain/37995] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:26,014 INFO [Listener at localhost.localdomain/37995] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:26,018 INFO [Listener at localhost.localdomain/37995] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_33751_datanode____m9jnyy/webapp 2023-06-06 18:54:26,052 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe644759623bf5176: Processing first storage report for DS-36673a27-2940-4165-8eb4-a83fa09224fc from datanode edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3 2023-06-06 18:54:26,052 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe644759623bf5176: from storage DS-36673a27-2940-4165-8eb4-a83fa09224fc node DatanodeRegistration(127.0.0.1:41141, datanodeUuid=edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3, infoPort=38011, infoSecurePort=0, ipcPort=37995, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:26,052 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe644759623bf5176: Processing first storage report for DS-e80d7299-86f1-44d1-a855-a46be8af8abb from datanode edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3 2023-06-06 18:54:26,052 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe644759623bf5176: from storage DS-e80d7299-86f1-44d1-a855-a46be8af8abb node DatanodeRegistration(127.0.0.1:41141, datanodeUuid=edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3, infoPort=38011, infoSecurePort=0, ipcPort=37995, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:54:26,102 INFO [Listener at localhost.localdomain/37995] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33751 2023-06-06 18:54:26,112 WARN [Listener at localhost.localdomain/42037] common.MetricsLoggerTask(153): Metrics logging 
will not be async since the logger is not log4j 2023-06-06 18:54:26,193 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x97a8baee309c2060: Processing first storage report for DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe from datanode 651c49c0-a949-4a2a-86b6-f4d5b3094211 2023-06-06 18:54:26,193 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x97a8baee309c2060: from storage DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe node DatanodeRegistration(127.0.0.1:42237, datanodeUuid=651c49c0-a949-4a2a-86b6-f4d5b3094211, infoPort=35451, infoSecurePort=0, ipcPort=42037, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:26,193 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x97a8baee309c2060: Processing first storage report for DS-7fae03e7-c922-4547-bd9b-794a142a8d6b from datanode 651c49c0-a949-4a2a-86b6-f4d5b3094211 2023-06-06 18:54:26,194 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x97a8baee309c2060: from storage DS-7fae03e7-c922-4547-bd9b-794a142a8d6b node DatanodeRegistration(127.0.0.1:42237, datanodeUuid=651c49c0-a949-4a2a-86b6-f4d5b3094211, infoPort=35451, infoSecurePort=0, ipcPort=42037, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:26,221 WARN [Listener at localhost.localdomain/42037] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:26,223 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,224 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077654285 block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK], DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]) is bad. 
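From 18:54:25 onward the listener brings up replacement datanodes (the three new Jetty datanode web UIs and their block reports above), and at 18:54:26 the original datanode at 127.0.0.1:37031 drops out of the open write pipelines, producing the DataStreamer error-recovery warning here and the DataXceiver EOF/interrupt stack traces that follow. A minimal sketch of how a test can provoke this against the mini DFS cluster (assumed API usage; the real test's exact sequence and datanode counts may differ):

    // Sketch under assumptions: add fresh datanodes, then stop one of the originals
    // so that open WAL pipelines report it as bad, as in the warnings above.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class DatanodeDeathSketch {
      static MiniDFSCluster.DataNodeProperties killOneDataNode(HBaseTestingUtility util)
          throws Exception {
        MiniDFSCluster dfs = util.getDFSCluster();
        dfs.startDataNodes(util.getConfiguration(), 2, true, null, null); // replacements (count illustrative)
        dfs.waitActive();
        return dfs.stopDataNode(0);  // keep the returned properties to restart it later
      }
    }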
2023-06-06 18:54:26,224 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-06 18:54:26,224 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,226 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-06 18:54:26,226 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077654745.meta block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]) is bad. 2023-06-06 18:54:26,228 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077654572 block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK], DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]) is bad. 
2023-06-06 18:54:26,228 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37031]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,236 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,36235,1686077655523/jenkins-hbase20.apache.org%2C36235%2C1686077655523.1686077655733 block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK], DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]) is bad. 
2023-06-06 18:54:26,239 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37031]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,248 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:26,248 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:34740 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34740 dst: /127.0.0.1:43943 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,248 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:34742 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34742 dst: /127.0.0.1:43943 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at 
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,250 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1717786921_17 at /127.0.0.1:34798 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34798 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43943 remote=/127.0.0.1:34798]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,251 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43943]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,251 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_666690949_17 at /127.0.0.1:34710 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34710 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43943 remote=/127.0.0.1:34710]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,257 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1717786921_17 at /127.0.0.1:47222 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:37031:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47222 dst: /127.0.0.1:37031 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,257 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43943]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,260 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_666690949_17 at /127.0.0.1:47156 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:37031:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47156 dst: /127.0.0.1:37031 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,352 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:26,352 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:47194 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:37031:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47194 dst: /127.0.0.1:37031 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,353 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid 3664d1ed-04b7-4984-8af6-3273fcedd7ee) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:54:26,352 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:47178 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:37031:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47178 dst: /127.0.0.1:37031 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,355 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data3/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:26,355 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data4/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:26,357 WARN [Listener at localhost.localdomain/42037] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:26,357 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,358 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,358 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,358 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:26,363 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:26,466 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:41410 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41410 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,467 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:41386 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41386 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,467 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1717786921_17 at /127.0.0.1:41424 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41424 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,466 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_666690949_17 at /127.0.0.1:41400 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43943:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41400 dst: /127.0.0.1:43943 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:26,468 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:26,470 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid 1976133d-36b0-4957-81f6-fd8b4c98e8bf) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:54:26,470 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data1/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:26,472 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data2/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:26,478 WARN [RS:0;jenkins-hbase20:41189.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:26,478 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41189%2C1686077654167:(num 1686077654572) roll requested 2023-06-06 18:54:26,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41189] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:26,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41189] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:34816 deadline: 1686077676476, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-06 18:54:26,488 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-06 18:54:26,488 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077654572 with entries=4, filesize=985 B; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 2023-06-06 18:54:26,489 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK], DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK]] 2023-06-06 18:54:26,489 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:26,489 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077654572 is not closed yet, will try archiving it next time 2023-06-06 18:54:26,489 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077654572; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:38,568 INFO [Listener at localhost.localdomain/42037] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 2023-06-06 18:54:38,569 WARN [Listener at localhost.localdomain/42037] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:38,571 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:38,572 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 block BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK], DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK]) is bad. 
2023-06-06 18:54:38,579 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:38,582 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:42750 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:41141:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42750 dst: /127.0.0.1:41141 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41141 remote=/127.0.0.1:42750]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:38,583 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41141]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:38,584 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:40528 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:34465:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40528 dst: /127.0.0.1:34465 java.io.InterruptedIOException: Interrupted while 
waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:38,684 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:38,684 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid 1d09127b-2110-4804-b821-26a66a6aa8b2) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:54:38,684 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data5/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:38,684 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data6/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:38,690 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK]] 2023-06-06 18:54:38,690 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK]] 2023-06-06 18:54:38,690 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41189%2C1686077654167:(num 1686077666478) roll requested 2023-06-06 18:54:38,695 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:36344 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data8/current]'}, localName='127.0.0.1:41141', datanodeUuid='edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741840_1021 to mirror 127.0.0.1:34465: java.net.ConnectException: Connection refused 2023-06-06 18:54:38,695 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741840_1021 2023-06-06 18:54:38,695 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:36344 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:41141:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36344 dst: /127.0.0.1:41141 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:38,698 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK] 2023-06-06 18:54:38,702 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741841_1022 2023-06-06 18:54:38,703 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK] 2023-06-06 18:54:38,714 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 with entries=2, filesize=2.36 KB; new WAL 
/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077678690 2023-06-06 18:54:38,714 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK], DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:38,714 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 is not closed yet, will try archiving it next time 2023-06-06 18:54:41,070 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1857b442] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41141, datanodeUuid=edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3, infoPort=38011, infoSecurePort=0, ipcPort=37995, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741839_1020 to 127.0.0.1:37031 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,694 WARN [Listener at localhost.localdomain/42037] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:54:42,696 WARN [ResponseProcessor for block BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:54:42,696 WARN [DataStreamer for file /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077678690 block BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023] hdfs.DataStreamer(1548): Error Recovery for BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK], DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK]) is bad. 
2023-06-06 18:54:42,700 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:52728 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52728 dst: /127.0.0.1:42237 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42237 remote=/127.0.0.1:52728]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,701 WARN [PacketResponder: BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42237]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,702 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:36358 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:41141:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36358 dst: /127.0.0.1:41141 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,702 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:54:42,809 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:54:42,809 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid edbdc61e-9b1b-46f9-a9a2-0a9f3fff4ce3) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:54:42,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data7/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:42,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data8/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:54:42,815 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:42,816 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:42,816 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41189%2C1686077654167:(num 1686077678690) roll requested 2023-06-06 18:54:42,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:32992 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741843_1025]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741843_1025 to mirror 127.0.0.1:41141: java.net.ConnectException: Connection refused 2023-06-06 18:54:42,822 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741843_1025 2023-06-06 18:54:42,822 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:32992 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741843_1025]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32992 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41189] regionserver.HRegion(9158): Flush requested on 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:54:42,823 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:54:42,823 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0a8ab02613e1a565df840dc6f149757b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:54:42,825 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741844_1026 2023-06-06 18:54:42,826 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK] 2023-06-06 18:54:42,829 ERROR [DataXceiver for 
client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:32998 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741845_1027 to mirror 127.0.0.1:37031: java.net.ConnectException: Connection refused 2023-06-06 18:54:42,829 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741845_1027 2023-06-06 18:54:42,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:32998 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32998 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,830 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK] 2023-06-06 18:54:42,832 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33008 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741847_1029 to mirror 127.0.0.1:34465: java.net.ConnectException: Connection refused 2023-06-06 18:54:42,832 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741847_1029 2023-06-06 18:54:42,833 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33008 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741847_1029]] 
datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33008 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,834 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK] 2023-06-06 18:54:42,834 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741846_1028 2023-06-06 18:54:42,835 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK] 2023-06-06 18:54:42,835 WARN [IPC Server handler 4 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-06 18:54:42,835 WARN [IPC Server handler 4 on default port 44371] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-06 18:54:42,835 WARN [IPC Server handler 4 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-06 18:54:42,836 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741849_1031 2023-06-06 18:54:42,837 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:54:42,839 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741850_1032 2023-06-06 18:54:42,839 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK] 2023-06-06 18:54:42,839 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077678690 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077682816 2023-06-06 18:54:42,839 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:42,840 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077678690 is not closed yet, will try archiving it next time 2023-06-06 18:54:42,843 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33028 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741851_1033 to mirror 127.0.0.1:43943: java.net.ConnectException: Connection refused 2023-06-06 18:54:42,843 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741851_1033 2023-06-06 18:54:42,843 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33028 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33028 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:42,844 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK] 2023-06-06 18:54:42,844 WARN [IPC Server handler 4 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-06 18:54:42,844 WARN [IPC Server handler 4 on default port 44371] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-06 18:54:42,845 WARN [IPC Server handler 4 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-06 18:54:43,041 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:43,042 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:43,042 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41189%2C1686077654167:(num 1686077682816) roll requested 2023-06-06 18:54:43,048 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33036 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741853_1035]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741853_1035 to mirror 127.0.0.1:43943: java.net.ConnectException: Connection refused 2023-06-06 18:54:43,048 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741853_1035 2023-06-06 18:54:43,048 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33036 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741853_1035]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33036 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:43,049 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK] 2023-06-06 18:54:43,051 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741854_1036 2023-06-06 18:54:43,051 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK] 2023-06-06 18:54:43,053 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33048 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741855_1037]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current]'}, localName='127.0.0.1:42237', datanodeUuid='651c49c0-a949-4a2a-86b6-f4d5b3094211', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741855_1037 to mirror 127.0.0.1:37031: java.net.ConnectException: Connection refused 2023-06-06 18:54:43,053 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741855_1037 2023-06-06 18:54:43,054 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1805093127_17 at /127.0.0.1:33048 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741855_1037]] datanode.DataXceiver(323): 127.0.0.1:42237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33048 dst: /127.0.0.1:42237 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:43,054 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37031,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK] 2023-06-06 18:54:43,055 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741856_1038 2023-06-06 18:54:43,056 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode 
DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:54:43,056 WARN [IPC Server handler 0 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-06 18:54:43,056 WARN [IPC Server handler 0 on default port 44371] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-06 18:54:43,056 WARN [IPC Server handler 0 on default port 44371] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-06 18:54:43,061 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077682816 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077683042 2023-06-06 18:54:43,061 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:54:43,061 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077682816 is not closed yet, will try archiving it next time 2023-06-06 18:54:43,246 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-06-06 18:54:43,251 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/e577a1b6e1334dbbaf198eed4c596536 2023-06-06 18:54:43,264 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/e577a1b6e1334dbbaf198eed4c596536 as hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/e577a1b6e1334dbbaf198eed4c596536 2023-06-06 18:54:43,270 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/e577a1b6e1334dbbaf198eed4c596536, entries=5, sequenceid=12, filesize=10.0 K 2023-06-06 18:54:43,271 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 0a8ab02613e1a565df840dc6f149757b in 448ms, sequenceid=12, compaction requested=false 2023-06-06 18:54:43,272 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0a8ab02613e1a565df840dc6f149757b: 2023-06-06 18:54:43,451 WARN [Listener at localhost.localdomain/42037] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:54:43,453 WARN [Listener at localhost.localdomain/42037] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:54:43,455 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:54:43,461 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/java.io.tmpdir/Jetty_localhost_38415_datanode____.9ok5hi/webapp 2023-06-06 18:54:43,465 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 to hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/oldWALs/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077666478 2023-06-06 18:54:43,534 INFO [Listener at localhost.localdomain/42037] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38415 2023-06-06 18:54:43,542 WARN [Listener at localhost.localdomain/43229] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:54:43,625 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa0793402845f4c67: Processing first storage report for 
DS-99ad7fc0-7940-4a82-936a-54815518e387 from datanode 3664d1ed-04b7-4984-8af6-3273fcedd7ee 2023-06-06 18:54:43,626 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa0793402845f4c67: from storage DS-99ad7fc0-7940-4a82-936a-54815518e387 node DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:54:43,627 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa0793402845f4c67: Processing first storage report for DS-2eda66b2-5b6e-465f-b5a5-aec18bf6337b from datanode 3664d1ed-04b7-4984-8af6-3273fcedd7ee 2023-06-06 18:54:43,627 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa0793402845f4c67: from storage DS-2eda66b2-5b6e-465f-b5a5-aec18bf6337b node DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:54:44,196 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@53042054] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:42237, datanodeUuid=651c49c0-a949-4a2a-86b6-f4d5b3094211, infoPort=35451, infoSecurePort=0, ipcPort=42037, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741852_1034 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:44,355 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:44,356 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C45701%2C1686077654122:(num 1686077654285) roll requested 2023-06-06 18:54:44,365 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:44,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_666690949_17 at /127.0.0.1:43202 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741858_1040]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data4/current]'}, localName='127.0.0.1:40563', datanodeUuid='3664d1ed-04b7-4984-8af6-3273fcedd7ee', xmitsInProgress=0}:Exception transfering block BP-547967001-148.251.75.209-1686077653604:blk_1073741858_1040 to mirror 127.0.0.1:41141: java.net.ConnectException: Connection refused 2023-06-06 18:54:44,367 WARN [Thread-705] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741858_1040 2023-06-06 18:54:44,366 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at 
org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:44,368 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_666690949_17 at /127.0.0.1:43202 [Receiving block BP-547967001-148.251.75.209-1686077653604:blk_1073741858_1040]] datanode.DataXceiver(323): 127.0.0.1:40563:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43202 dst: /127.0.0.1:40563 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:44,369 WARN [Thread-705] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:54:44,370 WARN [Thread-705] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741859_1041 2023-06-06 18:54:44,371 WARN [Thread-705] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34465,DS-0144582b-86cf-4961-a8ca-468d58cee2ef,DISK] 2023-06-06 18:54:44,379 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-06 18:54:44,380 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077654285 with entries=88, filesize=43.75 
KB; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077684356 2023-06-06 18:54:44,380 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK], DatanodeInfoWithStorage[127.0.0.1:40563,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]] 2023-06-06 18:54:44,380 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077654285 is not closed yet, will try archiving it next time 2023-06-06 18:54:44,380 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:44,381 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122/jenkins-hbase20.apache.org%2C45701%2C1686077654122.1686077654285; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:54:45,196 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@f94fbc1] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:42237, datanodeUuid=651c49c0-a949-4a2a-86b6-f4d5b3094211, infoPort=35451, infoSecurePort=0, ipcPort=42037, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741848_1030 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:56,628 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2d3c289e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741837_1013 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:56,628 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5330c05c] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741835_1011 to 127.0.0.1:34465 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:57,627 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@58fa8b51] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, 
storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741831_1007 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:57,627 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4669bc47] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741827_1003 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:54:58,758 WARN [ReplicationMonitor] net.NetworkTopology(362): The cluster does not contain node: /default-rack/127.0.0.1:34465 2023-06-06 18:54:59,626 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@add04cc] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741828_1004 to 127.0.0.1:34465 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:02,083 INFO [Listener at localhost.localdomain/43229] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077683042 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077702061 2023-06-06 18:55:02,083 DEBUG [Listener at localhost.localdomain/43229] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40563,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK], 
DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK]] 2023-06-06 18:55:02,083 DEBUG [Listener at localhost.localdomain/43229] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.1686077683042 is not closed yet, will try archiving it next time 2023-06-06 18:55:02,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41189] regionserver.HRegion(9158): Flush requested on 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:55:02,088 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0a8ab02613e1a565df840dc6f149757b 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-06-06 18:55:02,090 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-06-06 18:55:02,118 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:55:02,118 INFO [Listener at localhost.localdomain/43229] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-06 18:55:02,119 DEBUG [Listener at localhost.localdomain/43229] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x402f9e8f to 127.0.0.1:62595 2023-06-06 18:55:02,119 DEBUG [Listener at localhost.localdomain/43229] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,119 DEBUG [Listener at localhost.localdomain/43229] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:55:02,119 DEBUG [Listener at localhost.localdomain/43229] util.JVMClusterUtil(257): Found active master hash=1327224955, stopped=false 2023-06-06 18:55:02,119 INFO [Listener at localhost.localdomain/43229] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:55:02,121 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:02,121 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:02,121 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:02,121 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:02,121 INFO [Listener at localhost.localdomain/43229] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:55:02,122 DEBUG [Listener at localhost.localdomain/43229] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b6c2b66 to 127.0.0.1:62595 2023-06-06 18:55:02,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:02,122 DEBUG [Listener at localhost.localdomain/43229] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:02,122 INFO [Listener at localhost.localdomain/43229] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,41189,1686077654167' ***** 2023-06-06 18:55:02,122 INFO [Listener at localhost.localdomain/43229] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:55:02,122 INFO [Listener at localhost.localdomain/43229] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36235,1686077655523' ***** 2023-06-06 18:55:02,122 INFO [Listener at localhost.localdomain/43229] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:55:02,132 INFO [RS:1;jenkins-hbase20:36235] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:55:02,132 INFO [RS:0;jenkins-hbase20:41189] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:55:02,134 INFO [RS:1;jenkins-hbase20:36235] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:55:02,134 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:55:02,134 INFO [RS:1;jenkins-hbase20:36235] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:55:02,134 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:55:02,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:02,134 DEBUG [RS:1;jenkins-hbase20:36235] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ab29bf0 to 127.0.0.1:62595 2023-06-06 18:55:02,134 DEBUG [RS:1;jenkins-hbase20:36235] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,134 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36235,1686077655523; all regions closed. 2023-06-06 18:55:02,136 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:55:02,136 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,147 ERROR [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
2023-06-06 18:55:02,147 DEBUG [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,147 DEBUG [RS:1;jenkins-hbase20:36235] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,147 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/48928c335abb4c61a44f856957dadc84 2023-06-06 18:55:02,147 INFO [RS:1;jenkins-hbase20:36235] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:55:02,147 INFO [RS:1;jenkins-hbase20:36235] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-06 18:55:02,147 INFO [RS:1;jenkins-hbase20:36235] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:55:02,147 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:55:02,148 INFO [RS:1;jenkins-hbase20:36235] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-06 18:55:02,148 INFO [RS:1;jenkins-hbase20:36235] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-06 18:55:02,148 INFO [RS:1;jenkins-hbase20:36235] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36235 2023-06-06 18:55:02,152 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:55:02,152 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:02,152 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36235,1686077655523 2023-06-06 18:55:02,152 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:02,152 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:02,153 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36235,1686077655523] 2023-06-06 18:55:02,153 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36235,1686077655523; numProcessing=1 2023-06-06 18:55:02,171 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36235,1686077655523 already deleted, retry=false 2023-06-06 18:55:02,172 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36235,1686077655523 expired; onlineServers=1 2023-06-06 18:55:02,174 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/48928c335abb4c61a44f856957dadc84 as hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/48928c335abb4c61a44f856957dadc84 2023-06-06 18:55:02,181 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/48928c335abb4c61a44f856957dadc84, entries=8, sequenceid=25, filesize=13.2 K 2023-06-06 18:55:02,183 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 0a8ab02613e1a565df840dc6f149757b in 95ms, sequenceid=25, compaction requested=false 2023-06-06 18:55:02,183 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 
0a8ab02613e1a565df840dc6f149757b: 2023-06-06 18:55:02,183 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-06 18:55:02,183 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:55:02,183 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/48928c335abb4c61a44f856957dadc84 because midkey is the same as first or last row 2023-06-06 18:55:02,183 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(3303): Received CLOSE for 0a8ab02613e1a565df840dc6f149757b 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(3303): Received CLOSE for 55728f232185f026efb1140d168ef73d 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:55:02,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0a8ab02613e1a565df840dc6f149757b, disabling compactions & flushes 2023-06-06 18:55:02,184 DEBUG [RS:0;jenkins-hbase20:41189] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07a4e54a to 127.0.0.1:62595 2023-06-06 18:55:02,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:55:02,184 DEBUG [RS:0;jenkins-hbase20:41189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:55:02,184 INFO [RS:0;jenkins-hbase20:41189] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:55:02,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. after waiting 0 ms 2023-06-06 18:55:02,185 INFO [RS:0;jenkins-hbase20:41189] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-06 18:55:02,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:55:02,185 INFO [RS:0;jenkins-hbase20:41189] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-06 18:55:02,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 0a8ab02613e1a565df840dc6f149757b 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-06 18:55:02,185 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:55:02,185 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-06 18:55:02,185 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1478): Online Regions={0a8ab02613e1a565df840dc6f149757b=TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b., 1588230740=hbase:meta,,1.1588230740, 55728f232185f026efb1140d168ef73d=hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d.} 2023-06-06 18:55:02,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:55:02,186 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:55:02,186 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1504): Waiting on 0a8ab02613e1a565df840dc6f149757b, 1588230740, 55728f232185f026efb1140d168ef73d 2023-06-06 18:55:02,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:55:02,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:55:02,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:55:02,187 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-06-06 18:55:02,187 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,188 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta:.meta(num 1686077654745) roll requested 2023-06-06 18:55:02,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:55:02,188 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,41189,1686077654167: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,189 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-06 18:55:02,192 WARN [Thread-737] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741863_1045 2023-06-06 18:55:02,193 WARN [Thread-737] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:55:02,193 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-06 18:55:02,194 WARN [Thread-738] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741864_1046 2023-06-06 18:55:02,194 WARN [Thread-738] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:55:02,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-06 18:55:02,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-06 18:55:02,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-06 18:55:02,196 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1011875840, "init": 524288000, "max": 2051014656, "used": 333185760 }, "NonHeapMemoryUsage": { "committed": 133521408, "init": 2555904, "max": -1, "used": 130878432 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-06 18:55:02,201 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45701] master.MasterRpcServices(609): jenkins-hbase20.apache.org,41189,1686077654167 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,41189,1686077654167: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,213 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-06 18:55:02,213 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077654745.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077702188.meta 2023-06-06 18:55:02,216 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42237,DS-943bd8cf-1fd1-4d6f-8a71-8c7611b8ddbe,DISK], DatanodeInfoWithStorage[127.0.0.1:40563,DS-99ad7fc0-7940-4a82-936a-54815518e387,DISK]] 2023-06-06 18:55:02,218 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,218 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077654745.meta is not closed yet, will try archiving it next time 2023-06-06 18:55:02,218 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167/jenkins-hbase20.apache.org%2C41189%2C1686077654167.meta.1686077654745.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43943,DS-3432beca-d8e2-4cb3-b6a9-325a89b5ed2e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:02,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/fcbb4c688e524e07a9c2dd79ced5386d 2023-06-06 18:55:02,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/.tmp/info/fcbb4c688e524e07a9c2dd79ced5386d as hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/fcbb4c688e524e07a9c2dd79ced5386d 2023-06-06 18:55:02,238 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/info/fcbb4c688e524e07a9c2dd79ced5386d, entries=9, sequenceid=37, filesize=14.2 K 2023-06-06 18:55:02,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 0a8ab02613e1a565df840dc6f149757b in 54ms, sequenceid=37, compaction requested=true 2023-06-06 18:55:02,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a8ab02613e1a565df840dc6f149757b/recovered.edits/40.seqid, newMaxSeqId=40, 
maxSeqId=1 2023-06-06 18:55:02,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:55:02,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0a8ab02613e1a565df840dc6f149757b: 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686077655628.0a8ab02613e1a565df840dc6f149757b. 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 55728f232185f026efb1140d168ef73d, disabling compactions & flushes 2023-06-06 18:55:02,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. after waiting 0 ms 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 55728f232185f026efb1140d168ef73d: 2023-06-06 18:55:02,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,386 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:55:02,387 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(3303): Received CLOSE for 55728f232185f026efb1140d168ef73d 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 55728f232185f026efb1140d168ef73d, disabling compactions & flushes 2023-06-06 18:55:02,387 DEBUG [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1504): Waiting on 1588230740, 55728f232185f026efb1140d168ef73d 2023-06-06 18:55:02,387 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:55:02,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 
2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. after waiting 0 ms 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 55728f232185f026efb1140d168ef73d: 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-06 18:55:02,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686077654819.55728f232185f026efb1140d168ef73d. 2023-06-06 18:55:02,422 INFO [RS:1;jenkins-hbase20:36235] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36235,1686077655523; zookeeper connection closed. 2023-06-06 18:55:02,422 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,423 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:36235-0x101c1c55c7a0005, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,423 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34d6a4c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34d6a4c 2023-06-06 18:55:02,441 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:55:02,470 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-06 18:55:02,470 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-06 18:55:02,587 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-06 18:55:02,587 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41189,1686077654167; all regions closed. 
2023-06-06 18:55:02,587 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:55:02,595 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/WALs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:55:02,601 DEBUG [RS:0;jenkins-hbase20:41189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,601 INFO [RS:0;jenkins-hbase20:41189] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:55:02,602 INFO [RS:0;jenkins-hbase20:41189] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-06 18:55:02,602 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:55:02,602 INFO [RS:0;jenkins-hbase20:41189] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41189 2023-06-06 18:55:02,604 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41189,1686077654167 2023-06-06 18:55:02,605 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:02,607 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,41189,1686077654167] 2023-06-06 18:55:02,607 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,41189,1686077654167; numProcessing=2 2023-06-06 18:55:02,608 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,41189,1686077654167 already deleted, retry=false 2023-06-06 18:55:02,608 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,41189,1686077654167 expired; onlineServers=0 2023-06-06 18:55:02,608 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,45701,1686077654122' ***** 2023-06-06 18:55:02,608 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:55:02,608 DEBUG [M:0;jenkins-hbase20:45701] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b245540, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:55:02,608 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:55:02,608 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45701,1686077654122; all regions closed. 
2023-06-06 18:55:02,608 DEBUG [M:0;jenkins-hbase20:45701] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:02,609 DEBUG [M:0;jenkins-hbase20:45701] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:55:02,609 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-06 18:55:02,609 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077654359] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077654359,5,FailOnTimeoutGroup] 2023-06-06 18:55:02,609 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077654358] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077654358,5,FailOnTimeoutGroup] 2023-06-06 18:55:02,609 DEBUG [M:0;jenkins-hbase20:45701] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:55:02,610 INFO [M:0;jenkins-hbase20:45701] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:55:02,610 INFO [M:0;jenkins-hbase20:45701] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-06 18:55:02,610 INFO [M:0;jenkins-hbase20:45701] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:55:02,611 DEBUG [M:0;jenkins-hbase20:45701] master.HMaster(1512): Stopping service threads 2023-06-06 18:55:02,611 INFO [M:0;jenkins-hbase20:45701] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:55:02,612 ERROR [M:0;jenkins-hbase20:45701] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-06 18:55:02,612 INFO [M:0;jenkins-hbase20:45701] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:55:02,612 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-06 18:55:02,612 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:02,612 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:02,613 DEBUG [M:0;jenkins-hbase20:45701] zookeeper.ZKUtil(398): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:55:02,613 WARN [M:0;jenkins-hbase20:45701] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:55:02,613 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:02,613 INFO [M:0;jenkins-hbase20:45701] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:55:02,613 INFO [M:0;jenkins-hbase20:45701] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:55:02,618 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:55:02,618 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:02,618 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:02,618 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:55:02,618 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-06 18:55:02,619 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.12 KB heapSize=45.77 KB 2023-06-06 18:55:02,626 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5316af5e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741825_1001 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:02,627 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5816aaff] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40563, datanodeUuid=3664d1ed-04b7-4984-8af6-3273fcedd7ee, infoPort=36723, infoSecurePort=0, ipcPort=43229, storageInfo=lv=-57;cid=testClusterID;nsid=1580867112;c=1686077653604):Failed to transfer BP-547967001-148.251.75.209-1686077653604:blk_1073741836_1012 to 127.0.0.1:41141 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:02,627 WARN [Thread-752] hdfs.DataStreamer(1658): Abandoning BP-547967001-148.251.75.209-1686077653604:blk_1073741867_1049 2023-06-06 18:55:02,627 WARN [Thread-752] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41141,DS-36673a27-2940-4165-8eb4-a83fa09224fc,DISK] 2023-06-06 18:55:02,637 INFO [M:0;jenkins-hbase20:45701] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.12 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9bd7c97ff3ee492598ce1c33a44ba17a 2023-06-06 18:55:02,644 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9bd7c97ff3ee492598ce1c33a44ba17a as hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9bd7c97ff3ee492598ce1c33a44ba17a 2023-06-06 18:55:02,652 INFO [M:0;jenkins-hbase20:45701] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:44371/user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9bd7c97ff3ee492598ce1c33a44ba17a, entries=11, sequenceid=92, filesize=7.0 K 2023-06-06 18:55:02,653 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegion(2948): Finished flush of dataSize ~38.12 KB/39035, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 35ms, sequenceid=92, compaction requested=false 2023-06-06 18:55:02,654 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:02,654 DEBUG [M:0;jenkins-hbase20:45701] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:55:02,657 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4d5f2423-2b9c-bc3b-46e3-b4e1407b4ec5/MasterData/WALs/jenkins-hbase20.apache.org,45701,1686077654122 2023-06-06 18:55:02,663 INFO [M:0;jenkins-hbase20:45701] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:55:02,663 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:55:02,663 INFO [M:0;jenkins-hbase20:45701] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45701 2023-06-06 18:55:02,665 DEBUG [M:0;jenkins-hbase20:45701] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,45701,1686077654122 already deleted, retry=false 2023-06-06 18:55:02,723 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,723 INFO [RS:0;jenkins-hbase20:41189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41189,1686077654167; zookeeper connection closed. 2023-06-06 18:55:02,723 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): regionserver:41189-0x101c1c55c7a0001, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,724 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@73cbac80] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@73cbac80 2023-06-06 18:55:02,724 INFO [Listener at localhost.localdomain/43229] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-06-06 18:55:02,823 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,824 INFO [M:0;jenkins-hbase20:45701] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45701,1686077654122; zookeeper connection closed. 
2023-06-06 18:55:02,824 DEBUG [Listener at localhost.localdomain/33801-EventThread] zookeeper.ZKWatcher(600): master:45701-0x101c1c55c7a0000, quorum=127.0.0.1:62595, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:02,825 WARN [Listener at localhost.localdomain/43229] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:02,831 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:02,936 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:02,936 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid 3664d1ed-04b7-4984-8af6-3273fcedd7ee) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:55:02,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data3/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:02,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data4/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:02,939 WARN [Listener at localhost.localdomain/43229] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:02,943 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:03,046 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:03,046 WARN [BP-547967001-148.251.75.209-1686077653604 heartbeating to localhost.localdomain/127.0.0.1:44371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-547967001-148.251.75.209-1686077653604 (Datanode Uuid 651c49c0-a949-4a2a-86b6-f4d5b3094211) service to localhost.localdomain/127.0.0.1:44371 2023-06-06 18:55:03,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data9/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:03,047 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/cluster_6b4870d0-6ee8-3568-3db7-486800a7c15d/dfs/data/data10/current/BP-547967001-148.251.75.209-1686077653604] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-06-06 18:55:03,059 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:55:03,179 INFO [Listener at localhost.localdomain/43229] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:55:03,216 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:55:03,225 INFO [Listener at localhost.localdomain/43229] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=75 (was 52) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:44371 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:44371 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener 
at localhost.localdomain/43229 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:44371 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:44371 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:44371 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:44371 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=458 (was 438) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=107 (was 109), ProcessCount=170 (was 170), AvailableMemoryMB=5917 (was 5857) - AvailableMemoryMB LEAK? - 2023-06-06 18:55:03,234 INFO [Listener at localhost.localdomain/43229] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=75, OpenFileDescriptor=458, MaxFileDescriptor=60000, SystemLoadAverage=107, ProcessCount=170, AvailableMemoryMB=5917 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/hadoop.log.dir so I do NOT create it in target/test-data/b0759bad-17de-c03d-decf-04223cf518c6 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c424e368-af58-851c-2fed-6acea85f9d6b/hadoop.tmp.dir so I do NOT create it in target/test-data/b0759bad-17de-c03d-decf-04223cf518c6 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad, deleteOnExit=true 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:55:03,235 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/test.cache.data in system properties and HBase conf 2023-06-06 18:55:03,236 INFO [Listener at 
localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:55:03,236 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/hadoop.log.dir in system properties and HBase conf 2023-06-06 18:55:03,236 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:55:03,236 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:55:03,236 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:55:03,236 DEBUG [Listener at localhost.localdomain/43229] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:55:03,237 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/nfs.dump.dir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:55:03,238 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:55:03,239 INFO [Listener at localhost.localdomain/43229] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:55:03,240 WARN [Listener at localhost.localdomain/43229] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
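The block above shows HBaseTestingUtility wiring up directories and configuration before "STARTING DFS" for testLogRollOnPipelineRestart. For orientation, here is a minimal, illustrative sketch of how a test typically drives this startup with the HBase 2.4-era test API (hbase-server test jar); it is not the actual TestLogRolling source, and the option values simply mirror the StartMiniClusterOption printed in the log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the logged StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1}.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    // Produces the DFS/ZooKeeper/master/region-server startup lines seen in this log.
    util.startMiniCluster(option);
    try {
      // Test body would go here: roll WALs, restart the HDFS pipeline, and so on.
    } finally {
      util.shutdownMiniCluster();  // tears the mini-cluster back down after the test method
    }
  }
}

The "ResourceChecker ... before:" line above is printed around each test method to record thread and file-descriptor counts; the "Potentially hanging thread" and "Thread LEAK?" output earlier in this section comes from the matching after-test check.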
2023-06-06 18:55:03,242 WARN [Listener at localhost.localdomain/43229] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:55:03,242 WARN [Listener at localhost.localdomain/43229] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:55:03,276 WARN [Listener at localhost.localdomain/43229] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:03,279 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:03,284 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_localdomain_38521_hdfs____p5y45l/webapp 2023-06-06 18:55:03,369 INFO [Listener at localhost.localdomain/43229] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38521 2023-06-06 18:55:03,370 WARN [Listener at localhost.localdomain/43229] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:55:03,371 WARN [Listener at localhost.localdomain/43229] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:55:03,371 WARN [Listener at localhost.localdomain/43229] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:55:03,410 WARN [Listener at localhost.localdomain/38445] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:03,423 WARN [Listener at localhost.localdomain/38445] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:03,426 WARN [Listener at localhost.localdomain/38445] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:03,427 INFO [Listener at localhost.localdomain/38445] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:03,432 INFO [Listener at localhost.localdomain/38445] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_33427_datanode____egva0w/webapp 2023-06-06 18:55:03,508 INFO [Listener at localhost.localdomain/38445] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33427 2023-06-06 18:55:03,514 WARN [Listener at localhost.localdomain/46367] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:03,537 WARN [Listener at localhost.localdomain/46367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:03,541 WARN [Listener at localhost.localdomain/46367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:03,542 INFO [Listener at localhost.localdomain/46367] 
log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:03,550 INFO [Listener at localhost.localdomain/46367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_45219_datanode____.v8t6n4/webapp 2023-06-06 18:55:03,602 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:55:03,610 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4755cd59d31b358e: Processing first storage report for DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:03,610 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4755cd59d31b358e: from storage DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf node DatanodeRegistration(127.0.0.1:36895, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=36957, infoSecurePort=0, ipcPort=46367, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:03,610 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4755cd59d31b358e: Processing first storage report for DS-255048da-a942-4dd6-b454-e9b728437cea from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:03,610 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4755cd59d31b358e: from storage DS-255048da-a942-4dd6-b454-e9b728437cea node DatanodeRegistration(127.0.0.1:36895, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=36957, infoSecurePort=0, ipcPort=46367, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:03,628 INFO [Listener at localhost.localdomain/46367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45219 2023-06-06 18:55:03,637 WARN [Listener at localhost.localdomain/43891] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:03,702 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a5d43fe57d77f6b: Processing first storage report for DS-c698d852-4961-4c87-ba24-7936fea50fed from datanode db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:03,702 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a5d43fe57d77f6b: from storage DS-c698d852-4961-4c87-ba24-7936fea50fed node DatanodeRegistration(127.0.0.1:44661, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=35501, infoSecurePort=0, ipcPort=43891, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:03,702 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a5d43fe57d77f6b: Processing first storage report for DS-c495a97d-f562-42e9-be37-45e6bc2a2353 from datanode db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:03,702 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a5d43fe57d77f6b: 
from storage DS-c495a97d-f562-42e9-be37-45e6bc2a2353 node DatanodeRegistration(127.0.0.1:44661, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=35501, infoSecurePort=0, ipcPort=43891, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:55:03,746 DEBUG [Listener at localhost.localdomain/43891] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6 2023-06-06 18:55:03,749 INFO [Listener at localhost.localdomain/43891] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/zookeeper_0, clientPort=61092, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:55:03,750 INFO [Listener at localhost.localdomain/43891] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61092 2023-06-06 18:55:03,751 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,752 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,767 INFO [Listener at localhost.localdomain/43891] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5 with version=8 2023-06-06 18:55:03,767 INFO [Listener at localhost.localdomain/43891] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:55:03,769 INFO [Listener at 
localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:55:03,769 INFO [Listener at localhost.localdomain/43891] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:55:03,770 INFO [Listener at localhost.localdomain/43891] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36267 2023-06-06 18:55:03,771 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,771 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,772 INFO [Listener at localhost.localdomain/43891] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36267 connecting to ZooKeeper ensemble=127.0.0.1:61092 2023-06-06 18:55:03,777 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:362670x0, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:55:03,777 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36267-0x101c1c61e6f0000 connected 2023-06-06 18:55:03,790 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:03,790 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:03,791 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:55:03,797 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36267 2023-06-06 18:55:03,797 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36267 2023-06-06 18:55:03,797 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36267 2023-06-06 18:55:03,798 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36267 2023-06-06 18:55:03,798 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36267 
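The RecoverableZooKeeper/ZKUtil lines above connect to the ensemble at 127.0.0.1:61092 and then "Set watcher on znode that does not yet exist" for /hbase/master, /hbase/running and /hbase/acl. The underlying ZooKeeper idiom is an exists() call with a watch, which registers interest even when the node is absent, so the client is notified when it is later created (the NodeCreated events for /hbase/master and /hbase/running appear further down). A rough sketch with the plain org.apache.zookeeper client, for illustration only; HBase wraps this in its own ZKWatcher/ZKUtil classes.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Connect string and session timeout taken from the log above (client port 61092).
    ZooKeeper zk = new ZooKeeper("127.0.0.1:61092", 30000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // Fires for events such as NodeCreated on /hbase/master, like the ZKWatcher lines in the log.
        System.out.println("event " + event.getType() + " on " + event.getPath());
      }
    });
    // Returns null when the znode does not exist yet; the watch is registered either way.
    Stat stat = zk.exists("/hbase/master", true);
    System.out.println("/hbase/master exists now? " + (stat != null));
    // A real client keeps the session open to receive the watch callback; closed here for brevity.
    zk.close();
  }
}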
2023-06-06 18:55:03,798 INFO [Listener at localhost.localdomain/43891] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5, hbase.cluster.distributed=false 2023-06-06 18:55:03,809 INFO [Listener at localhost.localdomain/43891] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:55:03,809 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,809 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,809 INFO [Listener at localhost.localdomain/43891] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:55:03,810 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:03,810 INFO [Listener at localhost.localdomain/43891] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:55:03,810 INFO [Listener at localhost.localdomain/43891] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:55:03,811 INFO [Listener at localhost.localdomain/43891] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37053 2023-06-06 18:55:03,811 INFO [Listener at localhost.localdomain/43891] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:55:03,812 DEBUG [Listener at localhost.localdomain/43891] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:55:03,813 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,814 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,814 INFO [Listener at localhost.localdomain/43891] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37053 connecting to ZooKeeper ensemble=127.0.0.1:61092 2023-06-06 18:55:03,820 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:370530x0, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:55:03,821 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ZKUtil(164): regionserver:370530x0, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:03,822 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37053-0x101c1c61e6f0001 connected 2023-06-06 18:55:03,822 DEBUG [Listener at 
localhost.localdomain/43891] zookeeper.ZKUtil(164): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:03,823 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ZKUtil(164): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:55:03,826 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37053 2023-06-06 18:55:03,826 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37053 2023-06-06 18:55:03,826 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37053 2023-06-06 18:55:03,826 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37053 2023-06-06 18:55:03,827 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37053 2023-06-06 18:55:03,828 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,832 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:55:03,833 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,834 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:03,834 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:03,834 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:55:03,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,36267,1686077703768 from backup master directory 2023-06-06 18:55:03,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on existing znode=/hbase/master 
2023-06-06 18:55:03,836 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,836 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:55:03,836 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-06 18:55:03,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,850 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/hbase.id with ID: 6549d807-cf51-4177-b89f-e09b3ca46a1d 2023-06-06 18:55:03,862 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:03,865 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,879 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0f43e2ba to 127.0.0.1:61092 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:55:03,884 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51efc377, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:55:03,884 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:55:03,885 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:55:03,885 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:55:03,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store-tmp 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:55:03,902 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:03,902 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:03,902 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:55:03,903 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,907 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36267%2C1686077703768, suffix=, logDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768, archiveDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/oldWALs, maxLogs=10 2023-06-06 18:55:03,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077703907 2023-06-06 18:55:03,917 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] 2023-06-06 18:55:03,917 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:55:03,917 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:03,917 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,917 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,923 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,925 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:55:03,925 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:55:03,926 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:03,927 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,927 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,930 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:55:03,932 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:55:03,932 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=710329, jitterRate=-0.09677091240882874}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:55:03,933 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:55:03,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:55:03,934 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:55:03,934 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:55:03,934 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-06 18:55:03,934 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-06 18:55:03,935 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-06 18:55:03,935 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:55:03,935 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:55:03,936 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:55:03,945 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:55:03,945 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
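The StochasticLoadBalancer(253) line above lists the tuning values it loaded: maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, plus the cost-function chain. These correspond to master-side configuration keys; the sketch below shows the keys as documented for HBase 2.x, with the logged defaults as values, purely for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static Configuration conf() {
    Configuration conf = HBaseConfiguration.create();
    // Values below are the defaults reported in the balancer log line above.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    return conf;
  }
}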
2023-06-06 18:55:03,946 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:55:03,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:55:03,946 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:55:03,948 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,948 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:55:03,949 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:55:03,950 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:55:03,951 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:03,951 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:03,951 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,951 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,36267,1686077703768, sessionid=0x101c1c61e6f0000, setting cluster-up flag (Was=false) 2023-06-06 18:55:03,955 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,958 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:55:03,963 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,974 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:03,977 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:55:03,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:03,978 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.hbase-snapshot/.tmp 2023-06-06 18:55:03,985 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:55:03,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:03,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077733991 2023-06-06 18:55:03,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:55:03,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:03,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:55:03,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:55:03,993 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:55:03,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:55:03,994 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:55:03,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:55:03,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:55:03,994 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077703994,5,FailOnTimeoutGroup] 2023-06-06 18:55:03,994 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077703994,5,FailOnTimeoutGroup] 2023-06-06 18:55:03,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:03,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-06 18:55:03,995 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:03,995 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
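One of the HMaster(1461) lines above names its own switch: reopening regions with very high storeFileRefCount stays disabled until hbase.regions.recovery.store.file.ref.count is set to a value greater than 0. A minimal sketch of enabling it; the threshold of 3 is an arbitrary example, not a recommendation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StoreFileRefCountSketch {
  public static Configuration conf() {
    Configuration conf = HBaseConfiguration.create();
    // Any value > 0 enables reopening regions whose store file reference count exceeds the threshold.
    conf.setInt("hbase.regions.recovery.store.file.ref.count", 3);
    return conf;
  }
}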
2023-06-06 18:55:03,995 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:55:04,029 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(951): ClusterId : 6549d807-cf51-4177-b89f-e09b3ca46a1d 2023-06-06 18:55:04,030 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:55:04,032 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:55:04,032 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:55:04,034 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:55:04,036 DEBUG [RS:0;jenkins-hbase20:37053] zookeeper.ReadOnlyZKClient(139): Connect 0x1f2c683a to 127.0.0.1:61092 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:55:04,040 DEBUG [RS:0;jenkins-hbase20:37053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b13ef07, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:55:04,040 DEBUG [RS:0;jenkins-hbase20:37053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e116201, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:55:04,050 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:37053 2023-06-06 18:55:04,050 INFO [RS:0;jenkins-hbase20:37053] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:55:04,050 INFO [RS:0;jenkins-hbase20:37053] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:55:04,050 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1022): About to register with Master. 
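The FSTableDescriptors(128) entry above prints the full hbase:meta descriptor, including an 'info' family with BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3' and BLOCKSIZE => '8192'. For readers less used to the descriptor toString format, here is a hedged sketch of expressing an equivalent column family with the public HBase 2.x client builders; this is not how InitMetaProcedure itself builds meta, and the table name is made up.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptorSketch {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE)   // BLOOMFILTER => 'NONE'
        .setInMemory(true)                    // IN_MEMORY => 'true'
        .setMaxVersions(3)                    // VERSIONS => '3'
        .setBlocksize(8192)                   // BLOCKSIZE => '8192'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_meta_like"))  // hypothetical table name
        .setColumnFamily(info)
        .build();
  }
}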
2023-06-06 18:55:04,051 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36267,1686077703768 with isa=jenkins-hbase20.apache.org/148.251.75.209:37053, startcode=1686077703809 2023-06-06 18:55:04,051 DEBUG [RS:0;jenkins-hbase20:37053] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:55:04,056 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52185, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:55:04,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,058 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5 2023-06-06 18:55:04,058 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38445 2023-06-06 18:55:04,058 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:55:04,060 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:04,060 DEBUG [RS:0;jenkins-hbase20:37053] zookeeper.ZKUtil(162): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,061 WARN [RS:0;jenkins-hbase20:37053] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-06 18:55:04,061 INFO [RS:0;jenkins-hbase20:37053] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:55:04,061 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,061 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,37053,1686077703809] 2023-06-06 18:55:04,065 DEBUG [RS:0;jenkins-hbase20:37053] zookeeper.ZKUtil(162): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,066 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:55:04,067 INFO [RS:0;jenkins-hbase20:37053] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:55:04,072 INFO [RS:0;jenkins-hbase20:37053] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:55:04,072 INFO [RS:0;jenkins-hbase20:37053] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:55:04,073 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,074 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:55:04,075 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,076 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,077 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,077 DEBUG [RS:0;jenkins-hbase20:37053] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:55:04,078 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,078 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,078 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,089 INFO [RS:0;jenkins-hbase20:37053] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:55:04,089 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37053,1686077703809-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:55:04,098 INFO [RS:0;jenkins-hbase20:37053] regionserver.Replication(203): jenkins-hbase20.apache.org,37053,1686077703809 started 2023-06-06 18:55:04,098 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,37053,1686077703809, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:37053, sessionid=0x101c1c61e6f0001 2023-06-06 18:55:04,098 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:55:04,098 DEBUG [RS:0;jenkins-hbase20:37053] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,098 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37053,1686077703809' 2023-06-06 18:55:04,098 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37053,1686077703809' 2023-06-06 18:55:04,099 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:55:04,100 DEBUG [RS:0;jenkins-hbase20:37053] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:55:04,100 DEBUG [RS:0;jenkins-hbase20:37053] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:55:04,100 INFO [RS:0;jenkins-hbase20:37053] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:55:04,100 INFO [RS:0;jenkins-hbase20:37053] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-06 18:55:04,202 INFO [RS:0;jenkins-hbase20:37053] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37053%2C1686077703809, suffix=, logDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809, archiveDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/oldWALs, maxLogs=32 2023-06-06 18:55:04,212 INFO [RS:0;jenkins-hbase20:37053] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 2023-06-06 18:55:04,212 DEBUG [RS:0;jenkins-hbase20:37053] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK], DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] 2023-06-06 18:55:04,409 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:55:04,411 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:55:04,411 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5 2023-06-06 18:55:04,425 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:04,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:55:04,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/info 2023-06-06 18:55:04,429 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:55:04,429 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:04,430 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:55:04,431 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:55:04,431 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:55:04,432 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:04,432 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:55:04,433 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/table 2023-06-06 18:55:04,433 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:55:04,434 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:04,435 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740 2023-06-06 18:55:04,435 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740 2023-06-06 18:55:04,438 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:55:04,440 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:55:04,445 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:55:04,446 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=840887, jitterRate=0.0692434310913086}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:55:04,446 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:55:04,446 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:55:04,446 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:55:04,446 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:55:04,446 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:55:04,446 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:55:04,448 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:55:04,448 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:55:04,449 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:55:04,450 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:55:04,450 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:55:04,452 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:55:04,454 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:55:04,604 DEBUG [jenkins-hbase20:36267] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:55:04,605 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37053,1686077703809, state=OPENING 2023-06-06 18:55:04,606 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:55:04,607 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:04,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37053,1686077703809}] 2023-06-06 18:55:04,609 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:55:04,765 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:04,765 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:55:04,768 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:55:04,772 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:55:04,772 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:55:04,774 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37053%2C1686077703809.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809, archiveDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/oldWALs, maxLogs=32 2023-06-06 18:55:04,786 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.meta.1686077704775.meta 2023-06-06 18:55:04,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] 2023-06-06 18:55:04,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:55:04,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:55:04,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:55:04,786 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:55:04,787 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:55:04,787 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:04,787 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:55:04,787 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:55:04,789 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:55:04,790 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/info 2023-06-06 18:55:04,790 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/info 2023-06-06 18:55:04,791 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:55:04,792 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-06-06 18:55:04,792 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:55:04,793 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:55:04,793 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:55:04,793 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:55:04,794 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:04,794 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:55:04,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/table 2023-06-06 18:55:04,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740/table 2023-06-06 18:55:04,796 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:55:04,796 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:04,797 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740 2023-06-06 18:55:04,798 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/meta/1588230740 2023-06-06 18:55:04,800 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:55:04,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:55:04,804 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=829864, jitterRate=0.05522716045379639}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:55:04,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:55:04,811 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077704765 2023-06-06 18:55:04,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:55:04,817 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:55:04,818 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37053,1686077703809, state=OPEN 2023-06-06 18:55:04,820 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:55:04,820 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:55:04,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:55:04,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37053,1686077703809 in 212 msec 2023-06-06 18:55:04,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:55:04,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 374 msec 2023-06-06 18:55:04,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 846 msec 2023-06-06 
18:55:04,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077704835, completionTime=-1 2023-06-06 18:55:04,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:55:04,835 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:55:04,838 DEBUG [hconnection-0x10122bdf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:55:04,841 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47882, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:55:04,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:55:04,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077764842 2023-06-06 18:55:04,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686077824842 2023-06-06 18:55:04,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077703768-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077703768-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077703768-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:36267, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:55:04,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-06 18:55:04,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:55:04,851 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:55:04,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:55:04,858 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:55:04,860 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:55:04,862 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:04,862 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a empty. 2023-06-06 18:55:04,863 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:04,863 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:55:04,876 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:55:04,878 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ebdb47376e9b8de4c8cf293e042c98a, NAME => 'hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp 2023-06-06 18:55:04,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:04,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0ebdb47376e9b8de4c8cf293e042c98a, disabling compactions & flushes 2023-06-06 18:55:04,885 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:04,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:04,885 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. after waiting 0 ms 2023-06-06 18:55:04,886 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:04,886 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:04,886 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0ebdb47376e9b8de4c8cf293e042c98a: 2023-06-06 18:55:04,888 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:55:04,889 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077704889"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077704889"}]},"ts":"1686077704889"} 2023-06-06 18:55:04,892 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:55:04,893 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:55:04,893 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077704893"}]},"ts":"1686077704893"} 2023-06-06 18:55:04,895 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:55:04,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ebdb47376e9b8de4c8cf293e042c98a, ASSIGN}] 2023-06-06 18:55:04,902 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ebdb47376e9b8de4c8cf293e042c98a, ASSIGN 2023-06-06 18:55:04,903 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0ebdb47376e9b8de4c8cf293e042c98a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37053,1686077703809; forceNewPlan=false, retain=false 2023-06-06 18:55:05,054 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0ebdb47376e9b8de4c8cf293e042c98a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:05,055 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077705054"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077705054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077705054"}]},"ts":"1686077705054"} 2023-06-06 18:55:05,057 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 0ebdb47376e9b8de4c8cf293e042c98a, server=jenkins-hbase20.apache.org,37053,1686077703809}] 2023-06-06 18:55:05,216 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:05,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ebdb47376e9b8de4c8cf293e042c98a, NAME => 'hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:55:05,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:05,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,218 INFO [StoreOpener-0ebdb47376e9b8de4c8cf293e042c98a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,220 DEBUG [StoreOpener-0ebdb47376e9b8de4c8cf293e042c98a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a/info 2023-06-06 18:55:05,220 DEBUG [StoreOpener-0ebdb47376e9b8de4c8cf293e042c98a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a/info 2023-06-06 18:55:05,220 INFO [StoreOpener-0ebdb47376e9b8de4c8cf293e042c98a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ebdb47376e9b8de4c8cf293e042c98a columnFamilyName info 2023-06-06 18:55:05,221 INFO [StoreOpener-0ebdb47376e9b8de4c8cf293e042c98a-1] regionserver.HStore(310): Store=0ebdb47376e9b8de4c8cf293e042c98a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:55:05,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:05,228 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/hbase/namespace/0ebdb47376e9b8de4c8cf293e042c98a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:55:05,229 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0ebdb47376e9b8de4c8cf293e042c98a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=829388, jitterRate=0.054621756076812744}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:55:05,229 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0ebdb47376e9b8de4c8cf293e042c98a: 2023-06-06 18:55:05,231 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a., pid=6, masterSystemTime=1686077705211 2023-06-06 18:55:05,233 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:05,233 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 
2023-06-06 18:55:05,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0ebdb47376e9b8de4c8cf293e042c98a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:05,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077705234"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077705234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077705234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077705234"}]},"ts":"1686077705234"} 2023-06-06 18:55:05,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:55:05,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 0ebdb47376e9b8de4c8cf293e042c98a, server=jenkins-hbase20.apache.org,37053,1686077703809 in 179 msec 2023-06-06 18:55:05,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:55:05,243 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0ebdb47376e9b8de4c8cf293e042c98a, ASSIGN in 341 msec 2023-06-06 18:55:05,243 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:55:05,244 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077705244"}]},"ts":"1686077705244"} 2023-06-06 18:55:05,246 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:55:05,248 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:55:05,250 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 398 msec 2023-06-06 18:55:05,253 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:55:05,265 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:55:05,266 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:05,270 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:55:05,279 DEBUG [Listener at localhost.localdomain/43891-EventThread] 
zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:55:05,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-06-06 18:55:05,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:55:05,303 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:55:05,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-06-06 18:55:05,320 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:55:05,322 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:55:05,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.486sec 2023-06-06 18:55:05,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:55:05,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:55:05,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:55:05,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077703768-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:55:05,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077703768-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-06 18:55:05,326 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:55:05,331 DEBUG [Listener at localhost.localdomain/43891] zookeeper.ReadOnlyZKClient(139): Connect 0x5aeb8cca to 127.0.0.1:61092 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:55:05,337 DEBUG [Listener at localhost.localdomain/43891] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42a906f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:55:05,340 DEBUG [hconnection-0x3cbf39a0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:55:05,343 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47888, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:55:05,345 INFO [Listener at localhost.localdomain/43891] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:05,345 INFO [Listener at localhost.localdomain/43891] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:05,350 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:55:05,350 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:05,351 INFO [Listener at localhost.localdomain/43891] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:55:05,351 INFO [Listener at localhost.localdomain/43891] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-06 18:55:05,351 INFO [Listener at localhost.localdomain/43891] wal.TestLogRolling(432): Replication=2 2023-06-06 18:55:05,353 DEBUG [Listener at localhost.localdomain/43891] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-06 18:55:05,358 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42600, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-06 18:55:05,361 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-06 18:55:05,361 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-06 18:55:05,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:55:05,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-06 18:55:05,366 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:55:05,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-06 18:55:05,367 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:55:05,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:55:05,372 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,373 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda empty. 
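The MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192) warnings above come from the deliberately tiny limits the test puts on its table so that flushes and rolls happen quickly. A sketch of building and creating such a descriptor with the 2.x client API, using the sizes and names from the create request above (the variable names and surrounding test scaffolding are assumed):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // admin is an open Admin handle to the minicluster.
    TableName name = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        .setMaxFileSize(786432)        // triggers the MAX_FILESIZE warning above
        .setMemStoreFlushSize(8192)    // triggers the MEMSTORE_FLUSHSIZE warning above
        .build();
    admin.createTable(desc);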
2023-06-06 18:55:05,373 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,373 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-06 18:55:05,385 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-06 18:55:05,387 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => dc10ecf4e656458b7e8716ba4184ceda, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/.tmp 2023-06-06 18:55:05,395 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:05,396 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing dc10ecf4e656458b7e8716ba4184ceda, disabling compactions & flushes 2023-06-06 18:55:05,396 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:05,396 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:05,396 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. after waiting 0 ms 2023-06-06 18:55:05,396 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:05,396 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 
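From here through the next few entries, the PEWorker threads walk CreateTableProcedure pid=9 through WRITE_FS_LAYOUT, ADD_TO_META and ASSIGN_REGIONS while the RPC handler keeps logging "Checking to see if procedure is done pid=9" on behalf of the waiting client. That polling corresponds to the usual submit-then-wait pattern on the client side; a sketch, assuming the Future-returning createTableAsync variant of the Admin API is available on this branch:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    // desc is the TableDescriptor built above; admin is an open Admin handle.
    Future<Void> createFuture = admin.createTableAsync(desc);
    // Blocks until the CreateTableProcedure (pid=9 in this run) finishes, or times out.
    createFuture.get(60, TimeUnit.SECONDS);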
2023-06-06 18:55:05,396 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for dc10ecf4e656458b7e8716ba4184ceda: 2023-06-06 18:55:05,399 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:55:05,400 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686077705400"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077705400"}]},"ts":"1686077705400"} 2023-06-06 18:55:05,402 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:55:05,403 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:55:05,403 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077705403"}]},"ts":"1686077705403"} 2023-06-06 18:55:05,405 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-06 18:55:05,408 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=dc10ecf4e656458b7e8716ba4184ceda, ASSIGN}] 2023-06-06 18:55:05,410 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=dc10ecf4e656458b7e8716ba4184ceda, ASSIGN 2023-06-06 18:55:05,411 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=dc10ecf4e656458b7e8716ba4184ceda, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37053,1686077703809; forceNewPlan=false, retain=false 2023-06-06 18:55:05,562 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=dc10ecf4e656458b7e8716ba4184ceda, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:05,562 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686077705562"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077705562"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077705562"}]},"ts":"1686077705562"} 2023-06-06 18:55:05,565 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure dc10ecf4e656458b7e8716ba4184ceda, 
server=jenkins-hbase20.apache.org,37053,1686077703809}] 2023-06-06 18:55:05,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:05,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dc10ecf4e656458b7e8716ba4184ceda, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:55:05,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:05,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,724 INFO [StoreOpener-dc10ecf4e656458b7e8716ba4184ceda-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,726 DEBUG [StoreOpener-dc10ecf4e656458b7e8716ba4184ceda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda/info 2023-06-06 18:55:05,726 DEBUG [StoreOpener-dc10ecf4e656458b7e8716ba4184ceda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda/info 2023-06-06 18:55:05,726 INFO [StoreOpener-dc10ecf4e656458b7e8716ba4184ceda-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dc10ecf4e656458b7e8716ba4184ceda columnFamilyName info 2023-06-06 18:55:05,727 INFO [StoreOpener-dc10ecf4e656458b7e8716ba4184ceda-1] regionserver.HStore(310): Store=dc10ecf4e656458b7e8716ba4184ceda/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-06-06 18:55:05,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:05,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/data/default/TestLogRolling-testLogRollOnPipelineRestart/dc10ecf4e656458b7e8716ba4184ceda/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:55:05,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened dc10ecf4e656458b7e8716ba4184ceda; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=829270, jitterRate=0.054472386837005615}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:55:05,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for dc10ecf4e656458b7e8716ba4184ceda: 2023-06-06 18:55:05,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda., pid=11, masterSystemTime=1686077705717 2023-06-06 18:55:05,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:05,737 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 
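With the region open (next sequenceid=2), the test can start writing rows; the "Validated row row1002" and "row1003" entries further down are it reading those rows back. A minimal sketch of that write-then-verify step with the standard Table API (qualifier and value below are illustrative; the row name echoes one validated later in this log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // conn is an open Connection to the minicluster.
    try (Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart"))) {
      byte[] family = Bytes.toBytes("info");
      byte[] row = Bytes.toBytes("row1002");
      // Each put goes through the WAL append path that the pipeline restart below exercises.
      table.put(new Put(row).addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v")));
      Result result = table.get(new Get(row));
      assert !result.isEmpty();
    }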
2023-06-06 18:55:05,738 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=dc10ecf4e656458b7e8716ba4184ceda, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:05,738 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686077705738"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077705738"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077705738"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077705738"}]},"ts":"1686077705738"} 2023-06-06 18:55:05,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-06 18:55:05,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure dc10ecf4e656458b7e8716ba4184ceda, server=jenkins-hbase20.apache.org,37053,1686077703809 in 175 msec 2023-06-06 18:55:05,745 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-06 18:55:05,745 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=dc10ecf4e656458b7e8716ba4184ceda, ASSIGN in 334 msec 2023-06-06 18:55:05,746 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:55:05,746 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077705746"}]},"ts":"1686077705746"} 2023-06-06 18:55:05,748 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-06 18:55:05,750 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:55:05,752 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 388 msec 2023-06-06 18:55:07,789 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:55:10,067 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-06 18:55:15,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:55:15,369 INFO [Listener at localhost.localdomain/43891] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-06-06 18:55:15,371 DEBUG [Listener at localhost.localdomain/43891] hbase.HBaseTestingUtility(2627): Found 1 regions for table 
TestLogRolling-testLogRollOnPipelineRestart 2023-06-06 18:55:15,372 DEBUG [Listener at localhost.localdomain/43891] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:17,378 INFO [Listener at localhost.localdomain/43891] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 2023-06-06 18:55:17,379 WARN [Listener at localhost.localdomain/43891] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:17,383 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:17,383 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-06 18:55:17,383 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-06 18:55:17,384 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.meta.1686077704775.meta block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]) is bad. 
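The DirectoryScanner shutdown and the burst of ResponseProcessor/DataStreamer errors that follow are the expected fallout of the test bouncing the mini-DFS datanodes underneath the regionserver's open WAL: the streams to the old datanode ports go bad, which is exactly what the "Error Recovery ... is bad" entries record. A hedged sketch of that restart step, assuming the MiniDFSCluster helpers reachable from the test's HBaseTestingUtility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // TEST_UTIL is the HBaseTestingUtility that started this minicluster.
    MiniDFSCluster dfs = TEST_UTIL.getDFSCluster();
    // Restart every datanode; existing WAL pipelines to the old ports break and must be recovered.
    dfs.restartDataNodes();
    dfs.waitActive();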
2023-06-06 18:55:17,384 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007] hdfs.DataStreamer(1548): Error Recovery for BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK], DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]) is bad. 2023-06-06 18:55:17,384 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077703907 block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:44661,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]) is bad. 2023-06-06 18:55:17,384 WARN [PacketResponder: BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44661]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,384 WARN [PacketResponder: BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44661]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,385 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:49394 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49394 dst: /127.0.0.1:36895 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,388 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2036593883_17 at /127.0.0.1:49362 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49362 dst: /127.0.0.1:36895 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,391 INFO [Listener at localhost.localdomain/43891] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:17,395 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:49386 
[Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49386 dst: /127.0.0.1:36895 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36895 remote=/127.0.0.1:49386]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,395 WARN [PacketResponder: BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36895]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,397 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:35064 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007]] datanode.DataXceiver(323): 127.0.0.1:44661:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35064 dst: /127.0.0.1:44661 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,496 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:35088 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44661:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35088 dst: /127.0.0.1:44661 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,498 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:17,498 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2036593883_17 at /127.0.0.1:35050 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44661:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35050 dst: /127.0.0.1:44661 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,499 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid db51f135-7174-4ae8-8fe1-03cc8ecac6ee) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:17,500 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data3/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:17,501 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data4/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:17,507 WARN [Listener at localhost.localdomain/43891] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:17,510 WARN [Listener at localhost.localdomain/43891] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:17,512 INFO [Listener at localhost.localdomain/43891] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:17,518 INFO [Listener at localhost.localdomain/43891] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_38293_datanode____pby3se/webapp 2023-06-06 18:55:17,603 INFO [Listener at localhost.localdomain/43891] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38293 2023-06-06 18:55:17,610 WARN [Listener at localhost.localdomain/35969] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:17,617 WARN [Listener at localhost.localdomain/35969] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:17,617 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:17,617 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:17,617 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:17,623 INFO [Listener at localhost.localdomain/35969] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:17,674 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ac8875d076de49a: Processing first storage report for DS-c698d852-4961-4c87-ba24-7936fea50fed from datanode db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:17,675 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ac8875d076de49a: from storage DS-c698d852-4961-4c87-ba24-7936fea50fed node DatanodeRegistration(127.0.0.1:46565, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=42815, infoSecurePort=0, ipcPort=35969, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:17,675 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ac8875d076de49a: Processing first storage report for DS-c495a97d-f562-42e9-be37-45e6bc2a2353 from datanode 
db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:17,675 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ac8875d076de49a: from storage DS-c495a97d-f562-42e9-be37-45e6bc2a2353 node DatanodeRegistration(127.0.0.1:46565, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=42815, infoSecurePort=0, ipcPort=35969, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:55:17,727 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:35226 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35226 dst: /127.0.0.1:36895 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,729 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:17,729 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2036593883_17 at /127.0.0.1:35214 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35214 dst: /127.0.0.1:36895 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,727 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:35238 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1007]] datanode.DataXceiver(323): 127.0.0.1:36895:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35238 dst: /127.0.0.1:36895 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:17,730 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid cd877cd2-20e0-4aa8-af19-c1d518339db6) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:17,733 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:17,734 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data2/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:17,742 WARN [Listener at localhost.localdomain/35969] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:17,744 WARN [Listener at localhost.localdomain/35969] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:17,746 INFO [Listener at localhost.localdomain/35969] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:17,753 INFO [Listener at localhost.localdomain/35969] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_38173_datanode____h4dsfj/webapp 2023-06-06 18:55:17,828 INFO [Listener at localhost.localdomain/35969] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38173 2023-06-06 18:55:17,835 WARN [Listener at localhost.localdomain/38443] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:17,893 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3c6d553f8673c03: Processing first storage report for DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:17,893 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3c6d553f8673c03: from storage DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf node DatanodeRegistration(127.0.0.1:40993, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=46289, infoSecurePort=0, ipcPort=38443, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:17,893 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3c6d553f8673c03: Processing first storage report for DS-255048da-a942-4dd6-b454-e9b728437cea from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:17,893 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3c6d553f8673c03: from storage DS-255048da-a942-4dd6-b454-e9b728437cea node DatanodeRegistration(127.0.0.1:40993, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=46289, infoSecurePort=0, ipcPort=38443, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:18,839 INFO [Listener at localhost.localdomain/38443] wal.TestLogRolling(481): Data Nodes restarted 2023-06-06 18:55:18,841 INFO [Listener at localhost.localdomain/38443] wal.AbstractTestLogRolling(233): Validated row row1002 2023-06-06 18:55:18,842 WARN [RS:0;jenkins-hbase20:37053.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:18,844 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C37053%2C1686077703809:(num 1686077704203) roll requested 2023-06-06 18:55:18,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:18,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37053] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47888 deadline: 1686077728842, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-06 18:55:18,859 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 newFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 2023-06-06 18:55:18,859 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-06 18:55:18,859 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 2023-06-06 18:55:18,860 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:40993,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] 2023-06-06 18:55:18,860 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 is not closed yet, will try archiving it next time 2023-06-06 18:55:18,860 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:18,860 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:30,890 INFO [Listener at localhost.localdomain/38443] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-06 18:55:32,893 WARN [Listener at localhost.localdomain/38443] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:32,895 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:32,896 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40993,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40993,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]) is bad. 
2023-06-06 18:55:32,902 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:60762 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:46565:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60762 dst: /127.0.0.1:46565 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:46565 remote=/127.0.0.1:60762]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:32,902 WARN [PacketResponder: BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:46565]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:32,903 INFO [Listener at localhost.localdomain/38443] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:32,909 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:37080 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40993:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37080 dst: /127.0.0.1:40993 java.io.InterruptedIOException: Interrupted while waiting 
for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:33,009 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:33,009 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid cd877cd2-20e0-4aa8-af19-c1d518339db6) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:33,010 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:33,010 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data2/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:33,019 WARN [Listener at localhost.localdomain/38443] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:33,022 WARN [Listener at localhost.localdomain/38443] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:33,023 INFO [Listener at localhost.localdomain/38443] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:33,028 INFO [Listener at localhost.localdomain/38443] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_43711_datanode____7258kf/webapp 2023-06-06 18:55:33,100 INFO [Listener at localhost.localdomain/38443] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43711 2023-06-06 18:55:33,107 WARN [Listener at localhost.localdomain/38029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:33,110 WARN [Listener at localhost.localdomain/38029] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:33,110 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:33,163 INFO [Listener at localhost.localdomain/38029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:33,214 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcedf4e945a41c56a: Processing first storage report for DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:33,214 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcedf4e945a41c56a: from storage DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf node DatanodeRegistration(127.0.0.1:44641, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=41401, infoSecurePort=0, ipcPort=38029, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:33,214 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcedf4e945a41c56a: Processing first storage report for DS-255048da-a942-4dd6-b454-e9b728437cea from datanode cd877cd2-20e0-4aa8-af19-c1d518339db6 2023-06-06 18:55:33,215 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcedf4e945a41c56a: from storage DS-255048da-a942-4dd6-b454-e9b728437cea node DatanodeRegistration(127.0.0.1:44641, datanodeUuid=cd877cd2-20e0-4aa8-af19-c1d518339db6, infoPort=41401, infoSecurePort=0, ipcPort=38029, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:33,267 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_715058815_17 at /127.0.0.1:49910 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:46565:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49910 dst: /127.0.0.1:46565 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:33,270 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:33,270 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid db51f135-7174-4ae8-8fe1-03cc8ecac6ee) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:33,270 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data3/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:33,270 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data4/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:33,279 WARN [Listener at localhost.localdomain/38029] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:33,281 WARN [Listener at localhost.localdomain/38029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:33,283 INFO [Listener at localhost.localdomain/38029] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:33,292 INFO [Listener at localhost.localdomain/38029] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/java.io.tmpdir/Jetty_localhost_34931_datanode____9dkz7h/webapp 2023-06-06 18:55:33,365 INFO [Listener at localhost.localdomain/38029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34931 2023-06-06 18:55:33,378 WARN [Listener at localhost.localdomain/40081] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:33,429 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1de99562b70aa024: Processing first storage report for DS-c698d852-4961-4c87-ba24-7936fea50fed from datanode db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:33,429 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1de99562b70aa024: from storage DS-c698d852-4961-4c87-ba24-7936fea50fed node DatanodeRegistration(127.0.0.1:38183, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=34213, infoSecurePort=0, ipcPort=40081, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:55:33,429 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1de99562b70aa024: Processing first storage report for DS-c495a97d-f562-42e9-be37-45e6bc2a2353 from datanode db51f135-7174-4ae8-8fe1-03cc8ecac6ee 2023-06-06 18:55:33,429 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1de99562b70aa024: from storage DS-c495a97d-f562-42e9-be37-45e6bc2a2353 node DatanodeRegistration(127.0.0.1:38183, datanodeUuid=db51f135-7174-4ae8-8fe1-03cc8ecac6ee, infoPort=34213, infoSecurePort=0, ipcPort=40081, storageInfo=lv=-57;cid=testClusterID;nsid=784152401;c=1686077703244), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:33,994 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:33,998 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36267%2C1686077703768:(num 1686077703907) roll requested 2023-06-06 18:55:33,998 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,000 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,009 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-06 18:55:34,009 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077703907 with entries=88, filesize=43.82 KB; new WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077733998 2023-06-06 18:55:34,009 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44641,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:38183,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] 2023-06-06 18:55:34,009 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077703907 is not closed yet, will try archiving it next time 2023-06-06 18:55:34,009 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,010 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768/jenkins-hbase20.apache.org%2C36267%2C1686077703768.1686077703907; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,386 INFO [Listener at localhost.localdomain/40081] wal.TestLogRolling(498): Data Nodes restarted 2023-06-06 18:55:34,390 INFO [Listener at localhost.localdomain/40081] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-06 18:55:34,392 WARN [RS:0;jenkins-hbase20:37053.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,392 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C37053%2C1686077703809:(num 1686077718844) roll requested 2023-06-06 18:55:34,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37053] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47888 deadline: 1686077744391, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-06 18:55:34,405 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 newFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 2023-06-06 18:55:34,405 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-06 18:55:34,405 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 2023-06-06 18:55:34,405 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38183,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK], DatanodeInfoWithStorage[127.0.0.1:44641,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] 2023-06-06 18:55:34,405 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:34,405 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 is not closed yet, will try archiving it next time 2023-06-06 18:55:34,406 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46565,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:46,424 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 newFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 2023-06-06 18:55:46,426 INFO [Listener at localhost.localdomain/40081] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 2023-06-06 18:55:46,431 DEBUG [Listener at localhost.localdomain/40081] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44641,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:38183,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]] 2023-06-06 18:55:46,431 DEBUG [Listener at localhost.localdomain/40081] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 is not closed yet, will try archiving it next time 2023-06-06 18:55:46,432 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 2023-06-06 18:55:46,433 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 2023-06-06 18:55:46,438 WARN [IPC Server handler 3 on default port 38445] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741831_1016 2023-06-06 18:55:46,441 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 after 8ms 2023-06-06 18:55:47,466 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@85633cc] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-618062800-148.251.75.209-1686077703244:blk_1073741831_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:38183,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741831_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741831_1007, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data3/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data3/current/BP-618062800-148.251.75.209-1686077703244/current/rbw/blk_1073741831 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:50,442 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 after 4009ms 2023-06-06 18:55:50,442 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077704203 2023-06-06 18:55:50,451 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686077705229/Put/vlen=176/seqid=0] 2023-06-06 18:55:50,451 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #4: [default/info:d/1686077705275/Put/vlen=9/seqid=0] 2023-06-06 18:55:50,451 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #5: [hbase/info:d/1686077705300/Put/vlen=7/seqid=0] 2023-06-06 18:55:50,452 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686077705734/Put/vlen=232/seqid=0] 2023-06-06 18:55:50,452 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #4: [row1002/info:/1686077715376/Put/vlen=1045/seqid=0] 2023-06-06 18:55:50,452 DEBUG [Listener at localhost.localdomain/40081] wal.ProtobufLogReader(420): EOF at position 2162 2023-06-06 18:55:50,452 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 2023-06-06 18:55:50,452 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 2023-06-06 18:55:50,453 WARN [IPC Server handler 1 on default port 38445] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-06-06 18:55:50,453 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 after 1ms 2023-06-06 18:55:51,436 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@22a44612] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-618062800-148.251.75.209-1686077703244:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:44641,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current/BP-618062800-148.251.75.209-1686077703244/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current/BP-618062800-148.251.75.209-1686077703244/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-06-06 18:55:54,454 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 after 4002ms 2023-06-06 18:55:54,454 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077718844 2023-06-06 18:55:54,458 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #6: [row1003/info:/1686077728884/Put/vlen=1045/seqid=0] 2023-06-06 18:55:54,459 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #7: [row1004/info:/1686077730891/Put/vlen=1045/seqid=0] 2023-06-06 18:55:54,459 DEBUG [Listener at localhost.localdomain/40081] wal.ProtobufLogReader(420): EOF at position 2425 2023-06-06 18:55:54,459 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 2023-06-06 18:55:54,459 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 2023-06-06 18:55:54,460 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 after 0ms 2023-06-06 18:55:54,460 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077734392 2023-06-06 18:55:54,463 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(522): #9: [row1005/info:/1686077744401/Put/vlen=1045/seqid=0] 2023-06-06 18:55:54,464 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 2023-06-06 18:55:54,464 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 2023-06-06 18:55:54,464 WARN [IPC Server handler 0 on default port 38445] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 has not 
been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-06-06 18:55:54,465 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 after 1ms 2023-06-06 18:55:55,435 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2036593883_17 at /127.0.0.1:36568 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:44641:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36568 dst: /127.0.0.1:44641 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44641 remote=/127.0.0.1:36568]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:55,439 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2036593883_17 at /127.0.0.1:40760 [Receiving block BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:38183:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40760 dst: /127.0.0.1:38183 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:55,437 WARN [ResponseProcessor for block BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-06 18:55:55,440 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 block BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44641,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK], DatanodeInfoWithStorage[127.0.0.1:38183,DS-c698d852-4961-4c87-ba24-7936fea50fed,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:44641,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]) is bad. 2023-06-06 18:55:55,449 WARN [DataStreamer for file /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 block BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,466 INFO [Listener at localhost.localdomain/40081] util.RecoverLeaseFSUtils(175): Recovered 
lease, attempt=1 on file=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 after 4002ms 2023-06-06 18:55:58,466 DEBUG [Listener at localhost.localdomain/40081] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 2023-06-06 18:55:58,474 DEBUG [Listener at localhost.localdomain/40081] wal.ProtobufLogReader(420): EOF at position 83 2023-06-06 18:55:58,475 INFO [Listener at localhost.localdomain/40081] regionserver.HRegion(2745): Flushing dc10ecf4e656458b7e8716ba4184ceda 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-06 18:55:58,478 WARN [RS:0;jenkins-hbase20:37053.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,478 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C37053%2C1686077703809:(num 1686077746407) roll requested 2023-06-06 18:55:58,478 DEBUG [Listener at localhost.localdomain/40081] regionserver.HRegion(2446): Flush status journal for dc10ecf4e656458b7e8716ba4184ceda: 2023-06-06 18:55:58,479 INFO [Listener at localhost.localdomain/40081] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,482 INFO [Listener at localhost.localdomain/40081] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-06-06 18:55:58,482 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,483 DEBUG [Listener at localhost.localdomain/40081] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-06 18:55:58,483 INFO [Listener at localhost.localdomain/40081] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,486 INFO [Listener at localhost.localdomain/40081] regionserver.HRegion(2745): Flushing 0ebdb47376e9b8de4c8cf293e042c98a 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-06 18:55:58,486 DEBUG [Listener at localhost.localdomain/40081] regionserver.HRegion(2446): Flush status journal for 0ebdb47376e9b8de4c8cf293e042c98a: 2023-06-06 18:55:58,486 INFO [Listener at localhost.localdomain/40081] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,489 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:55:58,489 INFO [Listener at localhost.localdomain/40081] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-06 18:55:58,489 DEBUG [Listener at localhost.localdomain/40081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5aeb8cca to 127.0.0.1:61092 2023-06-06 18:55:58,489 DEBUG [Listener at localhost.localdomain/40081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:58,489 DEBUG [Listener at localhost.localdomain/40081] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:55:58,489 DEBUG [Listener at localhost.localdomain/40081] util.JVMClusterUtil(257): Found active master hash=1802645426, stopped=false 2023-06-06 18:55:58,494 INFO [Listener at localhost.localdomain/40081] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:58,497 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:58,497 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:55:58,497 INFO [Listener at localhost.localdomain/40081] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:55:58,497 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:58,497 DEBUG [Listener at localhost.localdomain/40081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0f43e2ba to 127.0.0.1:61092 2023-06-06 18:55:58,498 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:58,498 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-06-06 18:55:58,498 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 newFile=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077758478 2023-06-06 18:55:58,498 DEBUG [Listener at localhost.localdomain/40081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:58,498 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-06-06 18:55:58,498 INFO [Listener at localhost.localdomain/40081] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,37053,1686077703809' ***** 2023-06-06 18:55:58,498 INFO [Listener at localhost.localdomain/40081] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:55:58,498 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077758478 2023-06-06 18:55:58,498 INFO [RS:0;jenkins-hbase20:37053] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:55:58,498 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,499 INFO [RS:0;jenkins-hbase20:37053] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:55:58,499 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407 failed. 
Cause="Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-06 18:55:58,499 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:55:58,499 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at 
com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,499 INFO [RS:0;jenkins-hbase20:37053] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-06 18:55:58,500 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,500 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(3303): Received CLOSE for dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:58,500 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:58,501 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,502 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(3303): Received CLOSE for 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:58,502 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36895,DS-d26a9ca2-a059-41b0-8e33-f17f998be5bf,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing dc10ecf4e656458b7e8716ba4184ceda, disabling compactions & flushes 2023-06-06 18:55:58,502 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:58,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,502 DEBUG [RS:0;jenkins-hbase20:37053] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f2c683a to 127.0.0.1:61092 2023-06-06 18:55:58,503 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:58,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,504 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-06 18:55:58,503 DEBUG [RS:0;jenkins-hbase20:37053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:58,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. after waiting 0 ms 2023-06-06 18:55:58,504 INFO [RS:0;jenkins-hbase20:37053] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-06-06 18:55:58,504 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,37053,1686077703809: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,504 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-06 18:55:58,504 INFO [RS:0;jenkins-hbase20:37053] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-06-06 18:55:58,504 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-06 18:55:58,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for dc10ecf4e656458b7e8716ba4184ceda: 2023-06-06 18:55:58,506 INFO [RS:0;jenkins-hbase20:37053] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,506 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0ebdb47376e9b8de4c8cf293e042c98a, disabling compactions & flushes 2023-06-06 18:55:58,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. after waiting 0 ms 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0ebdb47376e9b8de4c8cf293e042c98a: 2023-06-06 18:55:58,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 
2023-06-06 18:55:58,506 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-06 18:55:58,506 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-06 18:55:58,507 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-06 18:55:58,507 INFO [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1086324736, "init": 524288000, "max": 2051014656, "used": 354784200 }, "NonHeapMemoryUsage": { "committed": 138764288, "init": 2555904, "max": -1, "used": 136227856 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-06 18:55:58,507 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-06 18:55:58,507 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1478): Online Regions={dc10ecf4e656458b7e8716ba4184ceda=TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda., 1588230740=hbase:meta,,1.1588230740, 0ebdb47376e9b8de4c8cf293e042c98a=hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a.} 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:55:58,507 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:55:58,507 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(3303): Received CLOSE for dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:55:58,507 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(3303): Received CLOSE for 0ebdb47376e9b8de4c8cf293e042c98a 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing dc10ecf4e656458b7e8716ba4184ceda, disabling compactions & flushes 2023-06-06 18:55:58,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 
after waiting 0 ms 2023-06-06 18:55:58,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,507 DEBUG [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1504): Waiting on 0ebdb47376e9b8de4c8cf293e042c98a, 1588230740, dc10ecf4e656458b7e8716ba4184ceda 2023-06-06 18:55:58,507 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(609): jenkins-hbase20.apache.org,37053,1686077703809 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,37053,1686077703809: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/WALs/jenkins-hbase20.apache.org,37053,1686077703809/jenkins-hbase20.apache.org%2C37053%2C1686077703809.1686077746407, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-618062800-148.251.75.209-1686077703244:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown 
Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-06 18:55:58,508 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1825): Memstore data size is 3028 in region hbase:meta,,1.1588230740 2023-06-06 18:55:58,508 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 
2023-06-06 18:55:58,508 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:55:58,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,508 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:55:58,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for dc10ecf4e656458b7e8716ba4184ceda: 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686077705360.dc10ecf4e656458b7e8716ba4184ceda. 2023-06-06 18:55:58,509 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C37053%2C1686077703809.meta:.meta(num 1686077704775) roll requested 2023-06-06 18:55:58,509 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0ebdb47376e9b8de4c8cf293e042c98a, disabling compactions & flushes 2023-06-06 18:55:58,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. after waiting 0 ms 2023-06-06 18:55:58,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,509 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0ebdb47376e9b8de4c8cf293e042c98a: 2023-06-06 18:55:58,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686077704850.0ebdb47376e9b8de4c8cf293e042c98a. 2023-06-06 18:55:58,708 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37053,1686077703809; all regions closed. 
2023-06-06 18:55:58,708 DEBUG [RS:0;jenkins-hbase20:37053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:58,708 INFO [RS:0;jenkins-hbase20:37053] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:55:58,708 INFO [RS:0;jenkins-hbase20:37053] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-06 18:55:58,709 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:55:58,710 INFO [RS:0;jenkins-hbase20:37053] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37053 2023-06-06 18:55:58,714 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37053,1686077703809 2023-06-06 18:55:58,714 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:58,714 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:55:58,716 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,37053,1686077703809] 2023-06-06 18:55:58,716 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,37053,1686077703809; numProcessing=1 2023-06-06 18:55:58,717 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,37053,1686077703809 already deleted, retry=false 2023-06-06 18:55:58,717 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,37053,1686077703809 expired; onlineServers=0 2023-06-06 18:55:58,717 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36267,1686077703768' ***** 2023-06-06 18:55:58,717 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:55:58,718 DEBUG [M:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f4db604, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:55:58,718 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:58,718 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36267,1686077703768; all regions closed. 
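The NodeDeleted and NodeChildrenChanged events above are how the master notices the region server going away: each live server holds an ephemeral child under /hbase/rs, and when the server's ZooKeeper session closes that node disappears and RegionServerTracker processes the expiration. A minimal sketch of the watch mechanism using the plain ZooKeeper client API (the connect string is the one shown in this log and is only valid while the mini ZK cluster runs; this is not HBase's internal ZKWatcher):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsWatchSketch {
      public static void main(String[] args) throws Exception {
        // Session-level watcher (ignored here); 30s session timeout.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:61092", 30000, event -> {});
        Watcher childWatch = new Watcher() {
          @Override public void process(WatchedEvent event) {
            // Fires once, e.g. NodeChildrenChanged on /hbase/rs when a server's
            // ephemeral node is deleted.
            System.out.println(event.getType() + " on " + event.getPath());
          }
        };
        List<String> servers = zk.getChildren("/hbase/rs", childWatch);
        System.out.println("online region servers: " + servers);
      }
    }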
2023-06-06 18:55:58,718 DEBUG [M:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:55:58,718 DEBUG [M:0;jenkins-hbase20:36267] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:55:58,718 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-06 18:55:58,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077703994] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077703994,5,FailOnTimeoutGroup] 2023-06-06 18:55:58,718 DEBUG [M:0;jenkins-hbase20:36267] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:55:58,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077703994] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077703994,5,FailOnTimeoutGroup] 2023-06-06 18:55:58,720 INFO [M:0;jenkins-hbase20:36267] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:55:58,721 INFO [M:0;jenkins-hbase20:36267] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-06 18:55:58,721 INFO [M:0;jenkins-hbase20:36267] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:55:58,721 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:58,721 DEBUG [M:0;jenkins-hbase20:36267] master.HMaster(1512): Stopping service threads 2023-06-06 18:55:58,721 INFO [M:0;jenkins-hbase20:36267] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:55:58,721 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:58,721 ERROR [M:0;jenkins-hbase20:36267] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-06 18:55:58,722 INFO [M:0;jenkins-hbase20:36267] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:55:58,722 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-06 18:55:58,722 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:58,722 DEBUG [M:0;jenkins-hbase20:36267] zookeeper.ZKUtil(398): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:55:58,722 WARN [M:0;jenkins-hbase20:36267] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:55:58,722 INFO [M:0;jenkins-hbase20:36267] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:55:58,723 INFO [M:0;jenkins-hbase20:36267] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:55:58,723 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:55:58,724 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:58,724 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:58,724 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:55:58,724 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-06 18:55:58,724 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.20 KB heapSize=45.83 KB 2023-06-06 18:55:58,740 INFO [M:0;jenkins-hbase20:36267] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.20 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d490021708414b9999458bfb398d09e7 2023-06-06 18:55:58,745 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d490021708414b9999458bfb398d09e7 as hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d490021708414b9999458bfb398d09e7 2023-06-06 18:55:58,750 INFO [M:0;jenkins-hbase20:36267] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38445/user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d490021708414b9999458bfb398d09e7, entries=11, sequenceid=92, filesize=7.0 K 2023-06-06 18:55:58,751 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegion(2948): Finished flush of dataSize ~38.20 KB/39113, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=92, compaction requested=false 2023-06-06 18:55:58,752 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:58,752 DEBUG [M:0;jenkins-hbase20:36267] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:55:58,753 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/82f3b5d6-c474-be6c-409c-26f55512f0d5/MasterData/WALs/jenkins-hbase20.apache.org,36267,1686077703768 2023-06-06 18:55:58,757 INFO [M:0;jenkins-hbase20:36267] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:55:58,757 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:55:58,758 INFO [M:0;jenkins-hbase20:36267] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36267 2023-06-06 18:55:58,760 DEBUG [M:0;jenkins-hbase20:36267] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,36267,1686077703768 already deleted, retry=false 2023-06-06 18:55:58,898 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:58,899 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): master:36267-0x101c1c61e6f0000, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:58,899 INFO [M:0;jenkins-hbase20:36267] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36267,1686077703768; zookeeper connection closed. 
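The master-store flush above follows the usual write-then-rename commit: the new file is written under a .tmp directory and only then moved into the store directory, so a crash mid-flush never leaves a half-written file visible. A small sketch of that general pattern with the plain Hadoop FileSystem API — made-up paths, not the ones in this log, and not HStore's internal code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TmpCommitSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path tmp = new Path("/example/store/.tmp/f1");   // staging location
        Path dst = new Path("/example/store/f1");        // committed location
        try (FSDataOutputStream out = fs.create(tmp)) {
          out.writeBytes("flushed data would go here");
        }
        // HDFS rename is atomic, so readers see either no file or the whole file.
        if (!fs.rename(tmp, dst)) {
          throw new java.io.IOException("commit of " + dst + " failed");
        }
      }
    }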
2023-06-06 18:55:58,999 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:58,999 DEBUG [Listener at localhost.localdomain/43891-EventThread] zookeeper.ZKWatcher(600): regionserver:37053-0x101c1c61e6f0001, quorum=127.0.0.1:61092, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:55:58,999 INFO [RS:0;jenkins-hbase20:37053] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37053,1686077703809; zookeeper connection closed. 2023-06-06 18:55:59,000 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@73303b6f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@73303b6f 2023-06-06 18:55:59,003 INFO [Listener at localhost.localdomain/40081] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-06 18:55:59,004 WARN [Listener at localhost.localdomain/40081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:59,009 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:59,115 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:55:59,115 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid db51f135-7174-4ae8-8fe1-03cc8ecac6ee) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:59,116 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data3/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:59,117 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data4/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:59,120 WARN [Listener at localhost.localdomain/40081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:55:59,124 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:55:59,216 WARN [BP-618062800-148.251.75.209-1686077703244 heartbeating to localhost.localdomain/127.0.0.1:38445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-618062800-148.251.75.209-1686077703244 (Datanode Uuid cd877cd2-20e0-4aa8-af19-c1d518339db6) service to localhost.localdomain/127.0.0.1:38445 2023-06-06 18:55:59,217 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data1/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:59,218 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/cluster_98b1865e-0e9a-404b-75d5-f6c2adf7bbad/dfs/data/data2/current/BP-618062800-148.251.75.209-1686077703244] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:55:59,245 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:55:59,363 INFO [Listener at localhost.localdomain/40081] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:55:59,375 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:55:59,383 INFO [Listener at localhost.localdomain/40081] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=86 (was 75) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:38445 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost.localdomain/40081 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:38445 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38445 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:38445 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1961072667) connection to localhost.localdomain/127.0.0.1:38445 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=459 (was 458) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=73 (was 107), ProcessCount=170 (was 170), AvailableMemoryMB=4970 (was 5917) 2023-06-06 18:55:59,391 INFO [Listener at localhost.localdomain/40081] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=86, OpenFileDescriptor=459, MaxFileDescriptor=60000, SystemLoadAverage=73, ProcessCount=170, AvailableMemoryMB=4969 2023-06-06 18:55:59,391 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/hadoop.log.dir so I do NOT create it in target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b0759bad-17de-c03d-decf-04223cf518c6/hadoop.tmp.dir so I do NOT create it in target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8, deleteOnExit=true 2023-06-06 18:55:59,392 INFO [Listener at 
localhost.localdomain/40081] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/test.cache.data in system properties and HBase conf 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/hadoop.log.dir in system properties and HBase conf 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:55:59,392 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:55:59,393 DEBUG [Listener at localhost.localdomain/40081] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:55:59,393 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/nfs.dump.dir in 
system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:55:59,394 INFO [Listener at localhost.localdomain/40081] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:55:59,396 WARN [Listener at localhost.localdomain/40081] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:55:59,397 WARN [Listener at localhost.localdomain/40081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:55:59,397 WARN [Listener at localhost.localdomain/40081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:55:59,423 WARN [Listener at localhost.localdomain/40081] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:59,426 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:59,430 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/java.io.tmpdir/Jetty_localhost_localdomain_35637_hdfs____jj77f6/webapp 2023-06-06 18:55:59,502 INFO [Listener at localhost.localdomain/40081] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35637 2023-06-06 18:55:59,503 WARN [Listener at localhost.localdomain/40081] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
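The entries above show HBaseTestingUtility rewriting the per-test directories into the Hadoop/HBase configuration and then formatting a fresh NameNode (clusterid testClusterID) before the DataNodes start. A minimal sketch of driving that same step from test code follows; it assumes the hbase-server test classpath, and the class name and the two-DataNode count are illustrative only, chosen to mirror the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // The utility rewrites directories such as hadoop.tmp.dir and
    // dfs.journalnode.edits.dir under its per-test data dir, as logged above.
    Configuration conf = util.getConfiguration();
    MiniDFSCluster dfs = util.startMiniDFSCluster(2);   // two DataNodes, like this run
    System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
    System.out.println("NameNode at " + dfs.getFileSystem().getUri());
    util.shutdownMiniDFSCluster();
  }
}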
2023-06-06 18:55:59,504 WARN [Listener at localhost.localdomain/40081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:55:59,505 WARN [Listener at localhost.localdomain/40081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:55:59,528 WARN [Listener at localhost.localdomain/34445] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:59,537 WARN [Listener at localhost.localdomain/34445] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:59,540 WARN [Listener at localhost.localdomain/34445] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:59,541 INFO [Listener at localhost.localdomain/34445] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:59,546 INFO [Listener at localhost.localdomain/34445] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/java.io.tmpdir/Jetty_localhost_41339_datanode____aql7hf/webapp 2023-06-06 18:55:59,617 INFO [Listener at localhost.localdomain/34445] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41339 2023-06-06 18:55:59,624 WARN [Listener at localhost.localdomain/42889] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:59,633 WARN [Listener at localhost.localdomain/42889] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:55:59,635 WARN [Listener at localhost.localdomain/42889] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:55:59,636 INFO [Listener at localhost.localdomain/42889] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:55:59,641 INFO [Listener at localhost.localdomain/42889] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/java.io.tmpdir/Jetty_localhost_33967_datanode____ynk20x/webapp 2023-06-06 18:55:59,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x12d6758ef0bcf957: Processing first storage report for DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2 from datanode 69210f3d-4856-434b-90cf-62ab5a4837ca 2023-06-06 18:55:59,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x12d6758ef0bcf957: from storage DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2 node DatanodeRegistration(127.0.0.1:40821, datanodeUuid=69210f3d-4856-434b-90cf-62ab5a4837ca, infoPort=38869, infoSecurePort=0, ipcPort=42889, storageInfo=lv=-57;cid=testClusterID;nsid=1735537510;c=1686077759399), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:59,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x12d6758ef0bcf957: Processing first storage report for DS-e0213c33-a0db-4b13-b2c9-70c3fb4479b1 
from datanode 69210f3d-4856-434b-90cf-62ab5a4837ca 2023-06-06 18:55:59,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x12d6758ef0bcf957: from storage DS-e0213c33-a0db-4b13-b2c9-70c3fb4479b1 node DatanodeRegistration(127.0.0.1:40821, datanodeUuid=69210f3d-4856-434b-90cf-62ab5a4837ca, infoPort=38869, infoSecurePort=0, ipcPort=42889, storageInfo=lv=-57;cid=testClusterID;nsid=1735537510;c=1686077759399), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:59,720 INFO [Listener at localhost.localdomain/42889] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33967 2023-06-06 18:55:59,727 WARN [Listener at localhost.localdomain/43035] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:55:59,777 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7213588ef8cd80b5: Processing first storage report for DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e from datanode b17c6be7-4289-4c69-85fa-c661ddfcc6c5 2023-06-06 18:55:59,777 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7213588ef8cd80b5: from storage DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e node DatanodeRegistration(127.0.0.1:36005, datanodeUuid=b17c6be7-4289-4c69-85fa-c661ddfcc6c5, infoPort=35721, infoSecurePort=0, ipcPort=43035, storageInfo=lv=-57;cid=testClusterID;nsid=1735537510;c=1686077759399), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:59,777 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7213588ef8cd80b5: Processing first storage report for DS-977b2796-de70-4b8f-b223-0ae71ff66d97 from datanode b17c6be7-4289-4c69-85fa-c661ddfcc6c5 2023-06-06 18:55:59,777 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7213588ef8cd80b5: from storage DS-977b2796-de70-4b8f-b223-0ae71ff66d97 node DatanodeRegistration(127.0.0.1:36005, datanodeUuid=b17c6be7-4289-4c69-85fa-c661ddfcc6c5, infoPort=35721, infoSecurePort=0, ipcPort=43035, storageInfo=lv=-57;cid=testClusterID;nsid=1735537510;c=1686077759399), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:55:59,836 DEBUG [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e 2023-06-06 18:55:59,840 INFO [Listener at localhost.localdomain/43035] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/zookeeper_0, clientPort=52238, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:55:59,842 INFO 
[Listener at localhost.localdomain/43035] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52238 2023-06-06 18:55:59,842 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,843 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,857 INFO [Listener at localhost.localdomain/43035] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9 with version=8 2023-06-06 18:55:59,857 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:55:59,858 INFO [Listener at localhost.localdomain/43035] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:55:59,859 INFO [Listener at localhost.localdomain/43035] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:55:59,860 INFO [Listener at localhost.localdomain/43035] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46631 2023-06-06 18:55:59,861 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,862 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,863 INFO [Listener at localhost.localdomain/43035] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46631 connecting to ZooKeeper ensemble=127.0.0.1:52238 
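Above, the test starts a MiniZooKeeperCluster (client port 52238), writes the version file into the fresh root dir, and the master-to-be opens a RecoverableZooKeeper session against the ensemble. Below is a hedged sketch of the same two steps using the public test and ZooKeeper APIs; the class name is made up and error handling is omitted.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
import org.apache.zookeeper.ZooKeeper;

public class MiniZkSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    MiniZooKeeperCluster zkCluster = util.startMiniZKCluster();
    int clientPort = zkCluster.getClientPort();          // 52238 in the run above
    // A plain client session, analogous to what RecoverableZooKeeper wraps.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:" + clientPort, 30000,
        event -> System.out.println("ZooKeeper event: " + event));  // ZKWatcher-style callback
    System.out.println("/hbase exists yet? " + (zk.exists("/hbase", false) != null));
    zk.close();
    util.shutdownMiniZKCluster();
  }
}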
2023-06-06 18:55:59,868 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:466310x0, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:55:59,869 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46631-0x101c1c6f98a0000 connected 2023-06-06 18:55:59,881 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:59,881 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:59,882 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:55:59,884 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46631 2023-06-06 18:55:59,885 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46631 2023-06-06 18:55:59,885 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46631 2023-06-06 18:55:59,885 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46631 2023-06-06 18:55:59,885 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46631 2023-06-06 18:55:59,885 INFO [Listener at localhost.localdomain/43035] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9, hbase.cluster.distributed=false 2023-06-06 18:55:59,897 INFO [Listener at localhost.localdomain/43035] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:55:59,897 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,897 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,897 INFO [Listener at localhost.localdomain/43035] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:55:59,897 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:55:59,898 INFO [Listener at localhost.localdomain/43035] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 
18:55:59,898 INFO [Listener at localhost.localdomain/43035] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:55:59,899 INFO [Listener at localhost.localdomain/43035] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36267 2023-06-06 18:55:59,899 INFO [Listener at localhost.localdomain/43035] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:55:59,900 DEBUG [Listener at localhost.localdomain/43035] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:55:59,901 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,902 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,903 INFO [Listener at localhost.localdomain/43035] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36267 connecting to ZooKeeper ensemble=127.0.0.1:52238 2023-06-06 18:55:59,905 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:362670x0, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:55:59,906 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): regionserver:362670x0, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:55:59,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36267-0x101c1c6f98a0001 connected 2023-06-06 18:55:59,907 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:55:59,908 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:55:59,908 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36267 2023-06-06 18:55:59,908 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36267 2023-06-06 18:55:59,909 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36267 2023-06-06 18:55:59,909 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36267 2023-06-06 18:55:59,909 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36267 2023-06-06 18:55:59,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:55:59,922 DEBUG [Listener at 
localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:55:59,922 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:55:59,926 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:59,926 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:55:59,927 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:59,927 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:55:59,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:55:59,928 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,46631,1686077759858 from backup master directory 2023-06-06 18:55:59,930 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:55:59,930 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:55:59,930 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
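The sequence above is the active-master handshake: the master registers an ephemeral znode under /hbase/backup-masters, watches /hbase/master, sees the NodeCreated event, and then deletes its backup-masters entry once it owns /hbase/master. The sketch below compresses that pattern into one method using the raw ZooKeeper client; the paths mirror the log, but the helper class is hypothetical and skips the session/retry handling the real ActiveMasterManager does.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MasterZnodeSketch {
  static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
    byte[] data = serverName.getBytes(StandardCharsets.UTF_8);
    // Register as a backup master first (ephemeral: it vanishes if this session dies).
    zk.create("/hbase/backup-masters/" + serverName, data,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    try {
      zk.create("/hbase/master", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      // "Deleting ZNode ... from backup master directory" in the log above.
      zk.delete("/hbase/backup-masters/" + serverName, -1);
      return true;
    } catch (KeeperException.NodeExistsException alreadyTaken) {
      // Someone else is active; stay registered as a backup and keep watching /hbase/master.
      return false;
    }
  }
}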
2023-06-06 18:55:59,930 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:55:59,948 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/hbase.id with ID: 186bcc67-0545-47ab-8008-c8758d6cb5ba 2023-06-06 18:55:59,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:55:59,961 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:55:59,968 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5737fd83 to 127.0.0.1:52238 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:55:59,976 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@481346d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:55:59,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:55:59,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:55:59,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:55:59,979 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store-tmp 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:55:59,988 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:59,988 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:55:59,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:55:59,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/WALs/jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:55:59,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46631%2C1686077759858, suffix=, logDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/WALs/jenkins-hbase20.apache.org,46631,1686077759858, archiveDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/oldWALs, maxLogs=10 2023-06-06 18:56:00,000 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/WALs/jenkins-hbase20.apache.org,46631,1686077759858/jenkins-hbase20.apache.org%2C46631%2C1686077759858.1686077759994 2023-06-06 18:56:00,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK]] 2023-06-06 18:56:00,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:56:00,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:00,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,003 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,004 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:56:00,005 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:56:00,005 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,009 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:56:00,012 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:56:00,013 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=736625, jitterRate=-0.0633334219455719}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:56:00,013 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:56:00,013 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:56:00,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:56:00,014 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:56:00,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-06 18:56:00,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-06 18:56:00,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-06 18:56:00,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:56:00,018 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:56:00,019 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:56:00,028 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:56:00,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
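Between the ZooKeeper handshake and the balancer setup, the log above shows the master bootstrapping its local 'master:store' region: a single 'proc' family (VERSIONS 1, ROW bloom filter, 64 KB blocks), a WAL created through FSHLogProvider, and the region opened to back the procedure store. As a sketch only, an equivalent family can be described with the public 2.x builder API; the table name below is a placeholder, not the real internal one.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .setInMemory(false)                // IN_MEMORY => 'false'
            .build())
        .build();
  }
}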
2023-06-06 18:56:00,029 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:56:00,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:56:00,030 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:56:00,031 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,032 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:56:00,032 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:56:00,033 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:56:00,034 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:56:00,034 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:56:00,034 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,46631,1686077759858, sessionid=0x101c1c6f98a0000, setting cluster-up flag (Was=false) 2023-06-06 18:56:00,037 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,039 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:56:00,040 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:56:00,042 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,044 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:56:00,045 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:56:00,045 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.hbase-snapshot/.tmp 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:56:00,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077790053 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:56:00,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:56:00,057 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,057 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:56:00,058 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:56:00,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:56:00,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:56:00,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:56:00,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:56:00,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:56:00,058 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077760058,5,FailOnTimeoutGroup] 2023-06-06 18:56:00,059 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077760059,5,FailOnTimeoutGroup] 2023-06-06 18:56:00,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-06 18:56:00,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
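The cleaners initialized above (LogsCleaner, HFileCleaner, ReplicationBarrierCleaner, SnapshotCleaner) all follow one pattern: a ScheduledChore registered with the master's ChoreService and run on a fixed period. A rough sketch of that pattern follows; the chore body is a stand-in for the real cleaner logic, and the name and 600000 ms period are copied from the log only for illustration.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class CleanerChoreSketch {
  public static void main(String[] args) throws Exception {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // Period of 600000 ms matches the LogsCleaner/HFileCleaner chores logged above.
    ScheduledChore chore = new ScheduledChore("ExampleCleaner", stopper, 600000) {
      @Override protected void chore() {
        // The real cleaners delete expired WALs/HFiles here; this just logs a tick.
        System.out.println("cleaner tick");
      }
    };
    ChoreService service = new ChoreService("example");
    service.scheduleChore(chore);   // "Chore ScheduledChore name=... is enabled."
    Thread.sleep(1000);
    service.shutdown();
  }
}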
2023-06-06 18:56:00,059 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:56:00,072 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:56:00,072 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:56:00,072 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9 2023-06-06 18:56:00,081 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:56:00,086 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:00,087 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:56:00,088 DEBUG 
[StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/info 2023-06-06 18:56:00,088 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:56:00,089 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,089 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:56:00,090 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:56:00,091 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:56:00,091 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,092 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:56:00,093 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/table 2023-06-06 18:56:00,094 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:56:00,095 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,095 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740 2023-06-06 18:56:00,096 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740 2023-06-06 18:56:00,100 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:56:00,102 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:56:00,103 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:56:00,104 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=802566, jitterRate=0.02051667869091034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:56:00,104 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:56:00,104 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:56:00,104 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:56:00,105 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:56:00,105 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:56:00,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:56:00,107 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:56:00,109 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:56:00,111 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(951): ClusterId : 186bcc67-0545-47ab-8008-c8758d6cb5ba 2023-06-06 18:56:00,112 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:56:00,114 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:56:00,114 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:56:00,115 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:56:00,116 DEBUG [RS:0;jenkins-hbase20:36267] zookeeper.ReadOnlyZKClient(139): Connect 0x4b96d26b to 127.0.0.1:52238 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:56:00,119 DEBUG [RS:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45513569, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:56:00,119 DEBUG [RS:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@416eb91b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:56:00,129 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36267 2023-06-06 18:56:00,129 INFO [RS:0;jenkins-hbase20:36267] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:56:00,129 INFO [RS:0;jenkins-hbase20:36267] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:56:00,129 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1022): About to register with Master. 
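
The ReadOnlyZKClient and AbstractRpcClient lines above show the new region server wiring up its own ZooKeeper session (127.0.0.1:52238, 90000 ms session timeout) and RPC client. As a minimal sketch only, and not part of the test itself, an external client pointed at the same minicluster would carry the equivalent settings in its Configuration; the class name and the table listing below are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Quorum address and client port taken from the ReadOnlyZKClient line above.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 52238);
        // 90000 ms matches the session timeout logged by ReadOnlyZKClient.
        conf.setInt("zookeeper.session.timeout", 90000);
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          for (TableName tn : admin.listTableNames()) {
            System.out.println("table: " + tn);
          }
        }
      }
    }
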
2023-06-06 18:56:00,130 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,46631,1686077759858 with isa=jenkins-hbase20.apache.org/148.251.75.209:36267, startcode=1686077759897 2023-06-06 18:56:00,130 DEBUG [RS:0;jenkins-hbase20:36267] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:56:00,134 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40625, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:56:00,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,135 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9 2023-06-06 18:56:00,136 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34445 2023-06-06 18:56:00,136 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:56:00,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:56:00,139 DEBUG [RS:0;jenkins-hbase20:36267] zookeeper.ZKUtil(162): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,139 WARN [RS:0;jenkins-hbase20:36267] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
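
The ZKUtil and RegionServerTracker lines above record the region server registering itself as an ephemeral child of /hbase/rs and the master picking that node up. A bare ZooKeeper client can list those children directly; this is a sketch under the assumption that the quorum at 127.0.0.1:52238 from this run is still reachable, and the class name is invented for illustration.

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZnodeListingSketch {
      public static void main(String[] args) throws Exception {
        // Connect string and session timeout mirror the values logged above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:52238", 90000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // No-op; the real watchers in the log react to NodeChildrenChanged on /hbase/rs.
          }
        });
        try {
          // Each live region server is an ephemeral child of /hbase/rs,
          // e.g. jenkins-hbase20.apache.org,36267,1686077759897 in this run.
          List<String> servers = zk.getChildren("/hbase/rs", false);
          servers.forEach(s -> System.out.println("registered: " + s));
        } finally {
          zk.close();
        }
      }
    }
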
2023-06-06 18:56:00,139 INFO [RS:0;jenkins-hbase20:36267] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:56:00,139 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,139 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36267,1686077759897] 2023-06-06 18:56:00,145 DEBUG [RS:0;jenkins-hbase20:36267] zookeeper.ZKUtil(162): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,145 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:56:00,146 INFO [RS:0;jenkins-hbase20:36267] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:56:00,147 INFO [RS:0;jenkins-hbase20:36267] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:56:00,147 INFO [RS:0;jenkins-hbase20:36267] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:56:00,147 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,147 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:56:00,149 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
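
The WALFactory line above picks FSHLogProvider as the WAL implementation, and the later AbstractFSWAL lines report rollsize=128 MB and maxLogs=32 for it. Roughly, and only as an assumption about which configuration keys feed those values, the provider and log cap are chosen like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "filesystem" selects the FSHLog-based provider named in the WALFactory line above;
        // "asyncfs" or "multiwal" would select different providers.
        conf.set("hbase.wal.provider", "filesystem");
        // Upper bound on WAL files before regions are flushed so old logs can be archived.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        System.out.println("wal provider = " + conf.get("hbase.wal.provider"));
      }
    }
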
2023-06-06 18:56:00,149 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,149 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,150 DEBUG [RS:0;jenkins-hbase20:36267] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:56:00,151 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,151 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,151 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,162 INFO [RS:0;jenkins-hbase20:36267] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:56:00,162 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36267,1686077759897-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
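
The ExecutorService lines above start one small, fixed-size handler pool per event type (RS_OPEN_REGION, RS_CLOSE_META, and so on), most with corePoolSize=1 and maxPoolSize=1. Purely as a conceptual stand-in, not HBase's own executor class, a pool with those bounds behaves like a single worker draining a queue:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class HandlerPoolSketch {
      public static void main(String[] args) throws Exception {
        // One worker thread, unbounded queue: submitted handlers run strictly one at a time.
        ThreadPoolExecutor openRegionPool = new ThreadPoolExecutor(
            1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        openRegionPool.submit(() -> System.out.println("pretend RS_OPEN_REGION handler ran"));
        openRegionPool.shutdown();
        openRegionPool.awaitTermination(10, TimeUnit.SECONDS);
      }
    }
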
2023-06-06 18:56:00,171 INFO [RS:0;jenkins-hbase20:36267] regionserver.Replication(203): jenkins-hbase20.apache.org,36267,1686077759897 started 2023-06-06 18:56:00,171 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36267,1686077759897, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36267, sessionid=0x101c1c6f98a0001 2023-06-06 18:56:00,171 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:56:00,171 DEBUG [RS:0;jenkins-hbase20:36267] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,171 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36267,1686077759897' 2023-06-06 18:56:00,171 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36267,1686077759897' 2023-06-06 18:56:00,172 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:56:00,173 DEBUG [RS:0;jenkins-hbase20:36267] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:56:00,173 DEBUG [RS:0;jenkins-hbase20:36267] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:56:00,173 INFO [RS:0;jenkins-hbase20:36267] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:56:00,173 INFO [RS:0;jenkins-hbase20:36267] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
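
The flush-table-proc and online-snapshot members started above sit idle until a client asks for a flush or a snapshot, at which point the coordinator fans work out through the /hbase/flush-table-proc and /hbase/online-snapshot znodes being watched here. A minimal client-side trigger, assuming a reachable cluster and borrowing the test table's name only as an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshotSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Exercises the flush-table-proc member started in the log above.
          admin.flush(table);
          // Exercises the online-snapshot member; the snapshot name is arbitrary.
          admin.snapshot("sketch-snapshot", table);
        }
      }
    }
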
2023-06-06 18:56:00,259 DEBUG [jenkins-hbase20:46631] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:56:00,260 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36267,1686077759897, state=OPENING 2023-06-06 18:56:00,261 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:56:00,262 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,263 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36267,1686077759897}] 2023-06-06 18:56:00,263 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:56:00,275 INFO [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36267%2C1686077759897, suffix=, logDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897, archiveDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs, maxLogs=32 2023-06-06 18:56:00,285 INFO [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077760276 2023-06-06 18:56:00,285 DEBUG [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK], DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK]] 2023-06-06 18:56:00,417 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,418 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:56:00,419 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38386, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:56:00,423 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:56:00,423 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:56:00,424 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36267%2C1686077759897.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897, archiveDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs, maxLogs=32 2023-06-06 18:56:00,434 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.meta.1686077760425.meta 2023-06-06 18:56:00,434 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK]] 2023-06-06 18:56:00,434 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:56:00,435 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:56:00,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:56:00,437 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:56:00,439 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/info 2023-06-06 18:56:00,439 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/info 2023-06-06 18:56:00,439 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:56:00,440 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,440 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:56:00,442 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:56:00,442 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:56:00,442 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:56:00,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:56:00,445 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/table 2023-06-06 18:56:00,445 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/table 2023-06-06 18:56:00,445 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:56:00,446 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,447 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740 2023-06-06 18:56:00,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740 2023-06-06 18:56:00,451 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:56:00,453 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:56:00,454 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=725143, jitterRate=-0.07793410122394562}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:56:00,455 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:56:00,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077760417 2023-06-06 18:56:00,461 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:56:00,462 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:56:00,462 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36267,1686077759897, state=OPEN 2023-06-06 18:56:00,463 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:56:00,464 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:56:00,465 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:56:00,466 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36267,1686077759897 in 201 msec 2023-06-06 
18:56:00,467 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:56:00,467 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 361 msec 2023-06-06 18:56:00,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 422 msec 2023-06-06 18:56:00,469 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077760469, completionTime=-1 2023-06-06 18:56:00,469 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:56:00,470 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:56:00,473 DEBUG [hconnection-0x18b199b2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:56:00,476 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:56:00,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:56:00,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077820477 2023-06-06 18:56:00,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686077880477 2023-06-06 18:56:00,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46631,1686077759858-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46631,1686077759858-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46631,1686077759858-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:46631, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
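
The master now notices the namespace table is missing and creates it, then runs CreateNamespaceProcedure for "default" and "hbase" (pid=7 and pid=8 below). User namespaces go through the same procedure; as a sketch, with an invented namespace name and a reachable cluster assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Triggers the same CreateNamespaceProcedure machinery seen for "default" and "hbase".
          admin.createNamespace(NamespaceDescriptor.create("sketch_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println("namespace: " + ns.getName());
          }
        }
      }
    }
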
2023-06-06 18:56:00,484 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:56:00,485 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:56:00,486 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:56:00,489 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:56:00,490 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:56:00,494 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,494 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88 empty. 2023-06-06 18:56:00,495 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,495 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:56:00,512 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:56:00,514 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 63ef23b2f59301805ed9a536094f0e88, NAME => 'hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 63ef23b2f59301805ed9a536094f0e88, disabling compactions & flushes 2023-06-06 18:56:00,529 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. after waiting 0 ms 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,529 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 63ef23b2f59301805ed9a536094f0e88: 2023-06-06 18:56:00,532 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:56:00,534 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077760533"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077760533"}]},"ts":"1686077760533"} 2023-06-06 18:56:00,536 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:56:00,538 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:56:00,538 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077760538"}]},"ts":"1686077760538"} 2023-06-06 18:56:00,540 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:56:00,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=63ef23b2f59301805ed9a536094f0e88, ASSIGN}] 2023-06-06 18:56:00,546 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=63ef23b2f59301805ed9a536094f0e88, ASSIGN 2023-06-06 18:56:00,547 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=63ef23b2f59301805ed9a536094f0e88, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36267,1686077759897; forceNewPlan=false, retain=false 2023-06-06 18:56:00,698 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=63ef23b2f59301805ed9a536094f0e88, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,699 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077760698"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077760698"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077760698"}]},"ts":"1686077760698"} 2023-06-06 18:56:00,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 63ef23b2f59301805ed9a536094f0e88, server=jenkins-hbase20.apache.org,36267,1686077759897}] 2023-06-06 18:56:00,857 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63ef23b2f59301805ed9a536094f0e88, NAME => 'hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:56:00,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:00,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,859 INFO [StoreOpener-63ef23b2f59301805ed9a536094f0e88-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,861 DEBUG [StoreOpener-63ef23b2f59301805ed9a536094f0e88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/info 2023-06-06 18:56:00,861 DEBUG [StoreOpener-63ef23b2f59301805ed9a536094f0e88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/info 2023-06-06 18:56:00,861 INFO [StoreOpener-63ef23b2f59301805ed9a536094f0e88-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63ef23b2f59301805ed9a536094f0e88 columnFamilyName info 2023-06-06 18:56:00,862 INFO [StoreOpener-63ef23b2f59301805ed9a536094f0e88-1] regionserver.HStore(310): Store=63ef23b2f59301805ed9a536094f0e88/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:00,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:56:00,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:56:00,869 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 63ef23b2f59301805ed9a536094f0e88; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=704873, jitterRate=-0.10370869934558868}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:56:00,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 63ef23b2f59301805ed9a536094f0e88: 2023-06-06 18:56:00,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88., pid=6, masterSystemTime=1686077760853 2023-06-06 18:56:00,876 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=63ef23b2f59301805ed9a536094f0e88, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:00,876 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077760876"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077760876"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077760876"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077760876"}]},"ts":"1686077760876"} 2023-06-06 18:56:00,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:00,876 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 
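
The RegionStateStore puts above write the namespace region's OPEN state, server, and open sequence number into hbase:meta; clients later read that assignment back through the meta table. A small lookup sketch (class name invented, cluster assumed reachable):

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // Reads back the assignment that the hbase:meta puts above recorded.
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            System.out.println(loc.getRegion().getEncodedName() + " @ " + loc.getServerName());
          }
        }
      }
    }
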
2023-06-06 18:56:00,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:56:00,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 63ef23b2f59301805ed9a536094f0e88, server=jenkins-hbase20.apache.org,36267,1686077759897 in 177 msec 2023-06-06 18:56:00,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:56:00,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=63ef23b2f59301805ed9a536094f0e88, ASSIGN in 337 msec 2023-06-06 18:56:00,884 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:56:00,884 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077760884"}]},"ts":"1686077760884"} 2023-06-06 18:56:00,886 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:56:00,888 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:56:00,889 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:56:00,889 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:56:00,889 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:00,891 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 405 msec 2023-06-06 18:56:00,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:56:00,918 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:56:00,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 31 msec 2023-06-06 18:56:00,936 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:56:00,945 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:56:00,949 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-06 18:56:00,967 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:56:00,969 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.039sec 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46631,1686077759858-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:56:00,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46631,1686077759858-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-06 18:56:00,971 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:56:01,012 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ReadOnlyZKClient(139): Connect 0x5cd5f104 to 127.0.0.1:52238 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:56:01,020 DEBUG [Listener at localhost.localdomain/43035] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53b8bc3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:56:01,022 DEBUG [hconnection-0x28afdce-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:56:01,025 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54928, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:56:01,027 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:56:01,027 INFO [Listener at localhost.localdomain/43035] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:56:01,031 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:56:01,031 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:56:01,031 INFO [Listener at localhost.localdomain/43035] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:56:01,034 DEBUG [Listener at localhost.localdomain/43035] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-06 18:56:01,038 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-06 18:56:01,039 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-06 18:56:01,040 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
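
The two TableDescriptorChecker warnings above flag the deliberately small limits in play for this test (a 786432-byte max file size and an 8192-byte memstore flush size, coming from the table descriptor or cluster configuration), which is what makes flushes and rolls happen quickly enough to observe. As a sketch of how such a descriptor is built with the 2.x client API; the class name is invented and creating the table this way is only an illustration, not the test's own code path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class SmallLimitsTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        // Same small limits the TableDescriptorChecker warns about above.
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .setMaxFileSize(786432L)
            .setMemStoreFlushSize(8192L)
            .build();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.balancerSwitch(false, true); // mirrors the "set balanceSwitch=false" call above
          admin.createTable(desc);
        }
      }
    }
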
2023-06-06 18:56:01,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:56:01,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:01,044 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:56:01,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-06 18:56:01,045 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:56:01,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:56:01,050 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,051 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc empty. 
2023-06-06 18:56:01,051 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,051 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-06 18:56:01,066 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-06 18:56:01,068 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6705c742d62f3c213cab19deb06164dc, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/.tmp 2023-06-06 18:56:01,083 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:01,084 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 6705c742d62f3c213cab19deb06164dc, disabling compactions & flushes 2023-06-06 18:56:01,084 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:01,084 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:01,084 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. after waiting 0 ms 2023-06-06 18:56:01,084 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:01,084 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 
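
From the client's point of view the CreateTableProcedure above (pid=9) is asynchronous; the earlier "Checking to see if procedure is done pid=9" line is the master answering exactly this kind of polling. A minimal wait loop, assuming a reachable cluster and reusing the test table's name only as an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Poll until the ASSIGN/OpenRegionProcedure below finishes and the table is usable.
          while (!admin.isTableAvailable(name)) {
            Thread.sleep(100);
          }
          System.out.println(name + " is available");
        }
      }
    }
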
2023-06-06 18:56:01,084 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:01,086 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:56:01,087 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686077761087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077761087"}]},"ts":"1686077761087"} 2023-06-06 18:56:01,089 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:56:01,090 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:56:01,090 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077761090"}]},"ts":"1686077761090"} 2023-06-06 18:56:01,092 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-06 18:56:01,096 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=6705c742d62f3c213cab19deb06164dc, ASSIGN}] 2023-06-06 18:56:01,098 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=6705c742d62f3c213cab19deb06164dc, ASSIGN 2023-06-06 18:56:01,099 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=6705c742d62f3c213cab19deb06164dc, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36267,1686077759897; forceNewPlan=false, retain=false 2023-06-06 18:56:01,250 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=6705c742d62f3c213cab19deb06164dc, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:01,250 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686077761250"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077761250"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077761250"}]},"ts":"1686077761250"} 2023-06-06 18:56:01,253 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 6705c742d62f3c213cab19deb06164dc, server=jenkins-hbase20.apache.org,36267,1686077759897}] 2023-06-06 18:56:01,409 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:01,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6705c742d62f3c213cab19deb06164dc, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:56:01,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:56:01,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,411 INFO [StoreOpener-6705c742d62f3c213cab19deb06164dc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,412 DEBUG [StoreOpener-6705c742d62f3c213cab19deb06164dc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info 2023-06-06 18:56:01,412 DEBUG [StoreOpener-6705c742d62f3c213cab19deb06164dc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info 2023-06-06 18:56:01,413 INFO [StoreOpener-6705c742d62f3c213cab19deb06164dc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6705c742d62f3c213cab19deb06164dc columnFamilyName info 2023-06-06 18:56:01,413 INFO [StoreOpener-6705c742d62f3c213cab19deb06164dc-1] regionserver.HStore(310): 
Store=6705c742d62f3c213cab19deb06164dc/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:56:01,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:56:01,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:56:01,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 6705c742d62f3c213cab19deb06164dc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=769260, jitterRate=-0.021836504340171814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:56:01,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:01,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc., pid=11, masterSystemTime=1686077761405 2023-06-06 18:56:01,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:01,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 
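
The CompactionConfiguration line printed while the 'info' store opens is an echo of the server-side compaction settings in effect for this run. As a rough guide to where those numbers come from, the sketch below sets the corresponding stock configuration properties to the same values; the property names are the standard HBase keys, and the values are only the defaults reported in the log line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettings {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter
      }
    }
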
2023-06-06 18:56:01,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=6705c742d62f3c213cab19deb06164dc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:01,424 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686077761423"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077761423"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077761423"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077761423"}]},"ts":"1686077761423"} 2023-06-06 18:56:01,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-06 18:56:01,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 6705c742d62f3c213cab19deb06164dc, server=jenkins-hbase20.apache.org,36267,1686077759897 in 173 msec 2023-06-06 18:56:01,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-06 18:56:01,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=6705c742d62f3c213cab19deb06164dc, ASSIGN in 332 msec 2023-06-06 18:56:01,431 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:56:01,431 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077761431"}]},"ts":"1686077761431"} 2023-06-06 18:56:01,432 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-06 18:56:01,435 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:56:01,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 395 msec 2023-06-06 18:56:04,064 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:56:06,146 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-06 18:56:06,147 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-06 18:56:06,147 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:11,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1227): Checking to see if 
procedure is done pid=9 2023-06-06 18:56:11,047 INFO [Listener at localhost.localdomain/43035] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-06 18:56:11,050 DEBUG [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:11,050 DEBUG [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:11,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-06 18:56:11,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-06 18:56:11,074 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-06 18:56:11,074 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:11,074 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-06 18:56:11,074 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-06-06 18:56:11,075 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,075 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-06 18:56:11,076 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:11,076 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,076 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:11,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:11,076 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,076 DEBUG 
[(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-06 18:56:11,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-06-06 18:56:11,077 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-06 18:56:11,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-06 18:56:11,078 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-06 18:56:11,079 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-06 18:56:11,080 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-06 18:56:11,080 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:11,081 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-06 18:56:11,081 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-06 18:56:11,081 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-06 18:56:11,081 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:11,082 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. started... 
2023-06-06 18:56:11,082 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 63ef23b2f59301805ed9a536094f0e88 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-06 18:56:11,095 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/.tmp/info/2856d95ae0a54733aed3b53eb49613dc 2023-06-06 18:56:11,105 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/.tmp/info/2856d95ae0a54733aed3b53eb49613dc as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/info/2856d95ae0a54733aed3b53eb49613dc 2023-06-06 18:56:11,113 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/info/2856d95ae0a54733aed3b53eb49613dc, entries=2, sequenceid=6, filesize=4.8 K 2023-06-06 18:56:11,113 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 63ef23b2f59301805ed9a536094f0e88 in 31ms, sequenceid=6, compaction requested=false 2023-06-06 18:56:11,114 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 63ef23b2f59301805ed9a536094f0e88: 2023-06-06 18:56:11,114 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:56:11,114 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-06 18:56:11,114 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
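
The flush-table-proc exchange running here is driven from the client: the HBaseAdmin lines later in the log ("Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'") are what Admin.flush(TableName) produces when the flush is routed through the distributed flush-table procedure. A minimal sketch of that client side, under the same connection assumptions as above; the execProcedure call in the comment is the equivalent lower-level form:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTables {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Mirrors the log: flush hbase:namespace first, then the test table.
          admin.flush(TableName.valueOf("hbase:namespace"));
          admin.flush(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
          // Equivalent lower-level form ("flush-table-proc" is the procedure signature seen in the log):
          // admin.execProcedure("flush-table-proc", "hbase:namespace", new java.util.HashMap<>());
        }
      }
    }
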
2023-06-06 18:56:11,114 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,114 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-06 18:56:11,114 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-06 18:56:11,116 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,116 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:11,116 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:11,116 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,116 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-06 18:56:11,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:11,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:11,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:11,118 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-06 18:56:11,118 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-06 18:56:11,118 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4bf5c8b8[Count = 0] remaining members to acquire global barrier 2023-06-06 18:56:11,118 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,120 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,120 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,120 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,120 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-06-06 18:56:11,120 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-06 18:56:11,120 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,36267,1686077759897' in zk 2023-06-06 18:56:11,120 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,120 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-06 18:56:11,121 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-06 18:56:11,121 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,121 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
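
The "|-abort / |-acquired / |-reached" dumps that ZKProcedureUtil keeps printing are the two-phase barrier state for the flush procedure: each member registers under acquired/, the coordinator creates reached/ once all members have acquired, and abort/ carries error propagation and cleanup. The same tree can be listed with a plain ZooKeeper client; a sketch, using the quorum address from this run purely as an illustration:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class DumpFlushProcZnodes {
      // Recursively print a znode subtree, roughly like ZKProcedureUtil's log dump.
      static void dump(ZooKeeper zk, String path, String indent) throws Exception {
        System.out.println(indent + "|-" + path.substring(path.lastIndexOf('/') + 1));
        List<String> children = zk.getChildren(path, false);
        for (String child : children) {
          dump(zk, path + "/" + child, indent + "   ");
        }
      }

      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:52238", 30000, event -> { });
        try {
          dump(zk, "/hbase/flush-table-proc", "");
        } finally {
          zk.close();
        }
      }
    }
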
2023-06-06 18:56:11,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,121 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:11,122 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:11,121 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-06-06 18:56:11,122 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:11,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:11,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:11,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,36267,1686077759897': 2023-06-06 18:56:11,124 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-06-06 18:56:11,124 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-06 18:56:11,124 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' released barrier for procedure'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-06-06 18:56:11,124 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-06 18:56:11,125 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-06-06 18:56:11,125 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-06 18:56:11,126 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,126 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:11,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:11,126 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,126 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:11,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:11,127 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:11,127 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:11,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:11,128 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,128 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:11,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-06 18:56:11,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:11,138 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:11,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-06 18:56:11,138 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:11,139 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,139 DEBUG [Listener at 
localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-06 18:56:11,139 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-06 18:56:11,138 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:11,139 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:11,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-06 18:56:11,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:11,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-06 18:56:11,142 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-06-06 18:56:11,143 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-06 18:56:21,143 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-06 18:56:21,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-06 18:56:21,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-06 18:56:21,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,168 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:21,168 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:21,169 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-06 18:56:21,169 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-06 18:56:21,169 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,169 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,170 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,170 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:21,170 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:21,170 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:21,171 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,171 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-06 18:56:21,171 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,171 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,171 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-06 18:56:21,171 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,171 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,172 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,172 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-06 18:56:21,172 
DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:21,172 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-06 18:56:21,173 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-06 18:56:21,173 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-06 18:56:21,173 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:21,173 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. started... 2023-06-06 18:56:21,173 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6705c742d62f3c213cab19deb06164dc 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-06 18:56:21,187 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/edf6381478414cf29c3d54928a806f64 2023-06-06 18:56:21,196 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/edf6381478414cf29c3d54928a806f64 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64 2023-06-06 18:56:21,207 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64, entries=1, sequenceid=5, filesize=5.8 K 2023-06-06 18:56:21,208 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 6705c742d62f3c213cab19deb06164dc in 35ms, sequenceid=5, compaction requested=false 2023-06-06 18:56:21,209 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 
6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:21,209 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:21,209 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-06 18:56:21,210 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-06 18:56:21,210 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,210 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-06 18:56:21,210 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-06 18:56:21,212 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,212 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:21,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:21,212 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,212 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-06 18:56:21,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:21,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:21,213 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,214 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,214 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:21,214 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-06 18:56:21,214 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4fca551e[Count = 0] remaining members to acquire global barrier 2023-06-06 18:56:21,214 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-06 18:56:21,214 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,215 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,215 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,215 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,216 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-06 18:56:21,216 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,216 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-06 18:56:21,216 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-06 18:56:21,216 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,36267,1686077759897' in zk 2023-06-06 18:56:21,217 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,217 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-06 18:56:21,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:21,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:21,217 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:21,218 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-06 18:56:21,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:21,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:21,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:21,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,36267,1686077759897': 2023-06-06 18:56:21,220 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-06 18:56:21,220 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-06 18:56:21,220 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-06 18:56:21,221 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-06 18:56:21,221 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,221 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-06 18:56:21,226 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,226 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,226 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created 
event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,226 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:21,226 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,226 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:21,226 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:21,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,227 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,227 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:21,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,228 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,233 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 
18:56:21,233 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:21,233 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,233 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:21,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:21,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-06 18:56:21,233 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:21,233 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,234 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-06 18:56:21,234 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-06 18:56:21,234 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-06 18:56:21,233 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:21,234 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:21,233 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:21,235 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,235 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,235 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:21,235 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:21,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:31,235 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-06 18:56:31,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-06 18:56:31,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-06 18:56:31,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-06-06 18:56:31,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,254 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:31,254 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:31,254 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-06 18:56:31,254 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-06 18:56:31,255 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,255 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,256 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:31,256 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,256 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:31,256 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:31,256 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,256 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-06 18:56:31,256 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,257 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-06 18:56:31,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,257 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,257 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-06 18:56:31,257 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,257 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-06 18:56:31,257 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:31,258 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-06 18:56:31,258 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-06 18:56:31,258 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-06 18:56:31,258 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:31,258 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. started... 
2023-06-06 18:56:31,258 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6705c742d62f3c213cab19deb06164dc 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-06 18:56:31,267 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/1afc4e72ee6f4976843108349410f832 2023-06-06 18:56:31,274 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/1afc4e72ee6f4976843108349410f832 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832 2023-06-06 18:56:31,283 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832, entries=1, sequenceid=9, filesize=5.8 K 2023-06-06 18:56:31,284 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 6705c742d62f3c213cab19deb06164dc in 26ms, sequenceid=9, compaction requested=false 2023-06-06 18:56:31,284 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:31,284 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:31,284 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-06 18:56:31,284 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-06 18:56:31,284 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,284 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-06 18:56:31,284 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-06 18:56:31,286 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,286 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:31,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:31,287 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,287 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-06 18:56:31,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:31,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:31,288 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,288 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,288 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:31,289 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-06 18:56:31,289 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@ac8c283[Count = 0] 
remaining members to acquire global barrier 2023-06-06 18:56:31,289 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-06 18:56:31,289 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,289 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,289 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,290 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-06 18:56:31,290 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-06 18:56:31,290 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,36267,1686077759897' in zk 2023-06-06 18:56:31,290 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,290 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-06 18:56:31,291 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-06 18:56:31,291 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,291 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-06 18:56:31,291 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:31,291 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-06 18:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:31,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,36267,1686077759897': 2023-06-06 18:56:31,295 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-06 18:56:31,295 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-06 18:56:31,295 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-06 18:56:31,295 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-06 18:56:31,295 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,296 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-06 18:56:31,304 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,304 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,304 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:31,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:31,304 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:31,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:31,305 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:31,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:31,305 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:31,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,306 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,306 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:31,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,307 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,312 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,312 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:31,312 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,312 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:31,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:31,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-06 18:56:31,312 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:31,312 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:31,312 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:31,312 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,313 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:31,313 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-06 18:56:31,314 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,314 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-06 18:56:31,314 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:31,314 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:31,314 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:31,314 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,314 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-06 18:56:41,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-06 18:56:41,335 INFO [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077760276 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077801317 2023-06-06 18:56:41,335 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK]] 2023-06-06 18:56:41,335 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077760276 is not closed yet, will try archiving it next time 2023-06-06 18:56:41,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-06 18:56:41,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-06 18:56:41,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,348 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:41,348 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:41,348 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-06 18:56:41,348 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-06 18:56:41,349 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,349 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,350 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:41,350 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,350 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:41,350 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:41,350 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,353 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,353 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-06 18:56:41,353 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,353 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-06 18:56:41,353 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,353 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,354 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-06 18:56:41,354 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,354 DEBUG [member: 
'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-06 18:56:41,354 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:41,354 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-06 18:56:41,355 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-06 18:56:41,355 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-06 18:56:41,355 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:41,355 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. started... 2023-06-06 18:56:41,355 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6705c742d62f3c213cab19deb06164dc 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-06 18:56:41,370 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/a73dce62a8af49478f683a076d87d1bf 2023-06-06 18:56:41,377 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/a73dce62a8af49478f683a076d87d1bf as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf 2023-06-06 18:56:41,383 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf, entries=1, sequenceid=13, filesize=5.8 K 2023-06-06 18:56:41,384 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
6705c742d62f3c213cab19deb06164dc in 29ms, sequenceid=13, compaction requested=true 2023-06-06 18:56:41,384 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:41,384 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:41,384 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-06 18:56:41,384 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-06 18:56:41,384 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,384 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-06 18:56:41,384 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-06 18:56:41,386 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,386 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,386 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,386 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:41,386 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:41,386 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,386 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-06-06 18:56:41,386 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:41,387 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:41,387 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,387 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,387 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:41,388 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-06 18:56:41,388 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5edd5cfa[Count = 0] remaining members to acquire global barrier 2023-06-06 18:56:41,388 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-06 18:56:41,388 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,388 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,388 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,388 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,388 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,389 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-06 18:56:41,388 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-06 18:56:41,389 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-06 18:56:41,389 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,36267,1686077759897' in zk 2023-06-06 18:56:41,390 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,390 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-06 18:56:41,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:41,390 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:41,390 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:41,390 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-06 18:56:41,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:41,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:41,391 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,392 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,392 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:41,392 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,392 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,393 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,36267,1686077759897': 2023-06-06 18:56:41,393 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. 
Waiting for 0 more 2023-06-06 18:56:41,393 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-06 18:56:41,393 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-06 18:56:41,393 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-06 18:56:41,393 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,393 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-06 18:56:41,394 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,394 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,394 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,394 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:41,394 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,394 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:41,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:41,406 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:41,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:41,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:41,406 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] 
zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:41,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,408 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:41,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,411 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:41,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:41,411 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,412 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:41,412 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:41,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:41,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-06 18:56:41,412 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:41,412 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:41,412 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-06 18:56:41,412 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
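The client.HBaseAdmin lines above come from the test requesting a table flush and then polling the master until the flush-table-proc procedure reports done. A hedged sketch of that client-side call follows, assuming a reachable cluster configuration; only the table name is taken from the log, the rest is invented for the example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch only: issue the flush that surfaces in the log as
// "exec procedure flush-table-proc" plus the HBaseAdmin wait/sleep loop.
public class FlushTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table =
                TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
            // Requests a flush of the table; the wait/poll loop in the log
            // corresponds to the client waiting for the flush procedure to finish.
            admin.flush(table);
        }
    }
}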
2023-06-06 18:56:41,413 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:41,413 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:51,413 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-06 18:56:51,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-06 18:56:51,417 DEBUG [Listener at localhost.localdomain/43035] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:56:51,426 DEBUG [Listener at localhost.localdomain/43035] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:56:51,426 DEBUG [Listener at localhost.localdomain/43035] regionserver.HStore(1912): 6705c742d62f3c213cab19deb06164dc/info is initiating minor compaction (all files) 2023-06-06 18:56:51,426 INFO [Listener at localhost.localdomain/43035] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:56:51,426 INFO [Listener at localhost.localdomain/43035] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:56:51,427 INFO [Listener at localhost.localdomain/43035] regionserver.HRegion(2259): Starting compaction of 6705c742d62f3c213cab19deb06164dc/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 
2023-06-06 18:56:51,427 INFO [Listener at localhost.localdomain/43035] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf] into tmpdir=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp, totalSize=17.4 K 2023-06-06 18:56:51,427 DEBUG [Listener at localhost.localdomain/43035] compactions.Compactor(207): Compacting edf6381478414cf29c3d54928a806f64, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1686077781159 2023-06-06 18:56:51,428 DEBUG [Listener at localhost.localdomain/43035] compactions.Compactor(207): Compacting 1afc4e72ee6f4976843108349410f832, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1686077791239 2023-06-06 18:56:51,428 DEBUG [Listener at localhost.localdomain/43035] compactions.Compactor(207): Compacting a73dce62a8af49478f683a076d87d1bf, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1686077801316 2023-06-06 18:56:51,445 INFO [Listener at localhost.localdomain/43035] throttle.PressureAwareThroughputController(145): 6705c742d62f3c213cab19deb06164dc#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:56:51,466 DEBUG [Listener at localhost.localdomain/43035] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/8354e5820b58490e9754ac6c50ddffc7 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/8354e5820b58490e9754ac6c50ddffc7 2023-06-06 18:56:51,472 INFO [Listener at localhost.localdomain/43035] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6705c742d62f3c213cab19deb06164dc/info of 6705c742d62f3c213cab19deb06164dc into 8354e5820b58490e9754ac6c50ddffc7(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:56:51,473 DEBUG [Listener at localhost.localdomain/43035] regionserver.HRegion(2289): Compaction status journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:51,490 INFO [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077801317 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077811474 2023-06-06 18:56:51,490 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK]] 2023-06-06 18:56:51,490 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077801317 is not closed yet, will try archiving it next time 2023-06-06 18:56:51,490 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077760276 to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077760276 2023-06-06 18:56:51,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-06 18:56:51,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-06 18:56:51,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,498 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:51,498 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:51,498 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-06 18:56:51,498 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
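Just above, the compaction journal closes out the three-file minor compaction and the region server's WAL is rolled ("Rolled WAL ... entries=4, filesize=2.45 KB; new WAL ..."), before a second flush-table-proc round is submitted. The sketch below shows the client-visible equivalents of those two operations through the Admin API; the table and server names come from the log, everything else is assumed, and in the test itself the compaction is invoked directly on the region rather than through Admin.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch only: request a compaction of the test table and roll the WAL of the
// single region server, mirroring the operations logged above.
public class CompactAndRollSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table =
                TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
            // Queues a (minor) compaction request for every store of the table.
            admin.compact(table);
            // Rolls the region server's WAL, which is what produces the
            // "Rolled WAL ... new WAL ..." entries.
            admin.rollWALWriter(
                ServerName.valueOf("jenkins-hbase20.apache.org,36267,1686077759897"));
        }
    }
}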
2023-06-06 18:56:51,499 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,499 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,501 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:51,501 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,501 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:51,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:51,501 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,501 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-06 18:56:51,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,501 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,502 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-06 18:56:51,502 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,502 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,504 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-06 18:56:51,504 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,504 DEBUG [member: 
'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-06 18:56:51,504 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-06 18:56:51,505 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-06 18:56:51,505 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-06 18:56:51,505 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-06 18:56:51,505 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:51,505 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. started... 2023-06-06 18:56:51,505 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6705c742d62f3c213cab19deb06164dc 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-06 18:56:51,518 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/7cce5aef563d4491b1cec4e3e3c4ddb2 2023-06-06 18:56:51,524 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/7cce5aef563d4491b1cec4e3e3c4ddb2 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/7cce5aef563d4491b1cec4e3e3c4ddb2 2023-06-06 18:56:51,531 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/7cce5aef563d4491b1cec4e3e3c4ddb2, entries=1, sequenceid=18, filesize=5.8 K 2023-06-06 18:56:51,533 INFO [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
6705c742d62f3c213cab19deb06164dc in 28ms, sequenceid=18, compaction requested=false 2023-06-06 18:56:51,533 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:56:51,533 DEBUG [rs(jenkins-hbase20.apache.org,36267,1686077759897)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:56:51,533 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-06 18:56:51,533 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-06 18:56:51,533 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,533 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-06 18:56:51,533 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-06 18:56:51,535 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,535 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,535 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,535 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:51,536 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:51,536 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,536 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-06-06 18:56:51,536 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:51,536 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:51,537 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,537 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,538 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:51,538 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,36267,1686077759897' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-06 18:56:51,538 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@3b08ef91[Count = 0] remaining members to acquire global barrier 2023-06-06 18:56:51,538 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-06 18:56:51,539 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,539 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,540 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,540 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,540 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-06 18:56:51,540 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-06 18:56:51,540 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,36267,1686077759897' in zk 2023-06-06 18:56:51,540 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,540 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-06 18:56:51,542 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-06 18:56:51,542 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,542 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:51,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:51,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:51,542 DEBUG [member: 'jenkins-hbase20.apache.org,36267,1686077759897' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-06 18:56:51,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:51,544 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:51,544 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,544 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,545 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:51,545 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,545 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,36267,1686077759897': 2023-06-06 18:56:51,546 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,36267,1686077759897' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-06 18:56:51,546 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-06 18:56:51,546 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-06 18:56:51,546 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-06 18:56:51,546 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,546 INFO [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-06 18:56:51,561 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,561 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,562 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-06 18:56:51,562 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,562 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-06 18:56:51,562 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:51,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:51,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-06 18:56:51,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,563 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-06 18:56:51,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,570 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 
18:56:51,570 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-06 18:56:51,570 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,570 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-06 18:56:51,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-06 18:56:51,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-06 18:56:51,570 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-06 18:56:51,570 DEBUG [(jenkins-hbase20.apache.org,46631,1686077759858)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-06 18:56:51,570 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:56:51,570 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-06 18:56:51,570 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:56:51,571 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,571 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:56:51,571 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-06 18:57:01,571 DEBUG [Listener at localhost.localdomain/43035] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-06 18:57:01,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46631] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-06 18:57:01,594 INFO [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077811474 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077821579 2023-06-06 18:57:01,594 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40821,DS-bf122d3c-8e35-4781-8db0-3c94bbd680b2,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-cc51ac99-6f43-4a35-8a2d-8b28b506585e,DISK]] 2023-06-06 18:57:01,595 DEBUG [Listener at localhost.localdomain/43035] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077811474 is not closed yet, will try archiving it next time 2023-06-06 18:57:01,595 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:57:01,595 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077801317 to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs/jenkins-hbase20.apache.org%2C36267%2C1686077759897.1686077801317 2023-06-06 18:57:01,595 INFO [Listener at localhost.localdomain/43035] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-06 18:57:01,595 DEBUG [Listener at localhost.localdomain/43035] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5cd5f104 to 127.0.0.1:52238 2023-06-06 18:57:01,597 DEBUG [Listener at localhost.localdomain/43035] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:57:01,598 DEBUG [Listener at localhost.localdomain/43035] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:57:01,598 DEBUG [Listener at localhost.localdomain/43035] util.JVMClusterUtil(257): Found active master hash=237670933, stopped=false 2023-06-06 18:57:01,598 INFO [Listener at localhost.localdomain/43035] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:57:01,600 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:57:01,600 INFO [Listener at localhost.localdomain/43035] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:57:01,600 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:57:01,601 DEBUG [Listener at 
localhost.localdomain/43035] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5737fd83 to 127.0.0.1:52238 2023-06-06 18:57:01,600 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:01,602 DEBUG [Listener at localhost.localdomain/43035] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:57:01,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:57:01,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:57:01,602 INFO [Listener at localhost.localdomain/43035] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36267,1686077759897' ***** 2023-06-06 18:57:01,602 INFO [Listener at localhost.localdomain/43035] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:57:01,603 INFO [RS:0;jenkins-hbase20:36267] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:57:01,603 INFO [RS:0;jenkins-hbase20:36267] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:57:01,603 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:57:01,603 INFO [RS:0;jenkins-hbase20:36267] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(3303): Received CLOSE for 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(3303): Received CLOSE for 63ef23b2f59301805ed9a536094f0e88 2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:57:01,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 6705c742d62f3c213cab19deb06164dc, disabling compactions & flushes 2023-06-06 18:57:01,604 DEBUG [RS:0;jenkins-hbase20:36267] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b96d26b to 127.0.0.1:52238 2023-06-06 18:57:01,604 DEBUG [RS:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:57:01,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:57:01,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-06-06 18:57:01,604 INFO [RS:0;jenkins-hbase20:36267] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-06 18:57:01,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. after waiting 0 ms 2023-06-06 18:57:01,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:57:01,605 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:57:01,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 6705c742d62f3c213cab19deb06164dc 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-06 18:57:01,605 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-06 18:57:01,605 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1478): Online Regions={6705c742d62f3c213cab19deb06164dc=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc., 1588230740=hbase:meta,,1.1588230740, 63ef23b2f59301805ed9a536094f0e88=hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88.} 2023-06-06 18:57:01,605 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:57:01,605 DEBUG [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1504): Waiting on 1588230740, 63ef23b2f59301805ed9a536094f0e88, 6705c742d62f3c213cab19deb06164dc 2023-06-06 18:57:01,606 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:57:01,606 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:57:01,606 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:57:01,606 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:57:01,606 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-06-06 18:57:01,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/.tmp/info/24b7e696dc65454b92af86a8d5b330a9 2023-06-06 18:57:01,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/9f819079ce1246998f196240f3a7165a 2023-06-06 18:57:01,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/.tmp/info/9f819079ce1246998f196240f3a7165a as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/9f819079ce1246998f196240f3a7165a 2023-06-06 18:57:01,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/9f819079ce1246998f196240f3a7165a, entries=1, sequenceid=22, filesize=5.8 K 2023-06-06 18:57:01,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 6705c742d62f3c213cab19deb06164dc in 34ms, sequenceid=22, compaction requested=true 2023-06-06 18:57:01,643 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf] to archive 2023-06-06 18:57:01,644 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-06 18:57:01,647 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64 to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/edf6381478414cf29c3d54928a806f64 2023-06-06 18:57:01,649 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832 to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/1afc4e72ee6f4976843108349410f832 2023-06-06 18:57:01,650 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf to hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/info/a73dce62a8af49478f683a076d87d1bf 2023-06-06 18:57:01,659 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/.tmp/table/a9c7635b4b85427e98988682b2593365 2023-06-06 18:57:01,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/.tmp/info/24b7e696dc65454b92af86a8d5b330a9 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/info/24b7e696dc65454b92af86a8d5b330a9 2023-06-06 18:57:01,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/6705c742d62f3c213cab19deb06164dc/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-06-06 18:57:01,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 
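For reference, the store files being flushed and then archived in the entries above are produced by ordinary client writes followed by flushes and a compaction. The following is a minimal sketch of that sequence against the public Table/Admin API, reusing the table and column-family names that appear in the log; the connection setup and row contents are illustrative assumptions, not taken from the test source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FlushAndCompactSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(tn);
         Admin admin = conn.getAdmin()) {
      // Each put + flush cycle turns the current memstore into one new HFile
      // under the 'info' column family.
      for (int i = 0; i < 3; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
        table.put(put);
        admin.flush(tn);          // flush the memstore to a new store file
      }
      admin.majorCompact(tn);     // request a major compaction that rewrites the files
    }
  }
}

Once the compaction rewrites those flush files into a single HFile, the pre-compaction files become obsolete, which is what leaves them for HFileArchiver to move under archive/ when the store closes, exactly as the three archive moves above show.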
2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 6705c742d62f3c213cab19deb06164dc: 2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686077761039.6705c742d62f3c213cab19deb06164dc. 2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 63ef23b2f59301805ed9a536094f0e88, disabling compactions & flushes 2023-06-06 18:57:01,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. after waiting 0 ms 2023-06-06 18:57:01,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:57:01,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/namespace/63ef23b2f59301805ed9a536094f0e88/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-06 18:57:01,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 2023-06-06 18:57:01,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 63ef23b2f59301805ed9a536094f0e88: 2023-06-06 18:57:01,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686077760484.63ef23b2f59301805ed9a536094f0e88. 
2023-06-06 18:57:01,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/info/24b7e696dc65454b92af86a8d5b330a9, entries=20, sequenceid=14, filesize=7.6 K 2023-06-06 18:57:01,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/.tmp/table/a9c7635b4b85427e98988682b2593365 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/table/a9c7635b4b85427e98988682b2593365 2023-06-06 18:57:01,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/table/a9c7635b4b85427e98988682b2593365, entries=4, sequenceid=14, filesize=4.9 K 2023-06-06 18:57:01,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 74ms, sequenceid=14, compaction requested=false 2023-06-06 18:57:01,687 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-06 18:57:01,688 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:57:01,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:57:01,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:57:01,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-06 18:57:01,806 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36267,1686077759897; all regions closed. 
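The 1588230740 region flushed and closed above is hbase:meta, the catalog that maps user regions (such as the TestLogRolling table's single region) to the servers hosting them. It can be read like any other table; a minimal sketch follows, with the connection setup assumed rather than taken from the test.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result row : scanner) {
        // Row keys are region names of the form <table>,<startKey>,<regionId>.<encodedName>.,
        // matching names like TestLogRolling-...,,1686077761039.6705c742d62f3c213cab19deb06164dc.
        System.out.println(Bytes.toString(row.getRow()));
      }
    }
  }
}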
2023-06-06 18:57:01,807 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:57:01,819 DEBUG [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs 2023-06-06 18:57:01,819 INFO [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C36267%2C1686077759897.meta:.meta(num 1686077760425) 2023-06-06 18:57:01,819 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/WALs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:57:01,826 DEBUG [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/oldWALs 2023-06-06 18:57:01,826 INFO [RS:0;jenkins-hbase20:36267] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C36267%2C1686077759897:(num 1686077821579) 2023-06-06 18:57:01,826 DEBUG [RS:0;jenkins-hbase20:36267] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:57:01,826 INFO [RS:0;jenkins-hbase20:36267] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:57:01,827 INFO [RS:0;jenkins-hbase20:36267] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-06 18:57:01,827 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
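The two "Moved N WAL file(s) to .../oldWALs" entries above are the normal end of a write-ahead log's life: once a WAL is rolled and no longer needed for recovery or replication it is relocated to the oldWALs directory, where a cleaner chore eventually removes it. A small sketch of inspecting such a directory with the plain Hadoop FileSystem API, with the NameNode address and an /hbase root directory assumed (the test's real paths sit under its test-data directory, as the log shows):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListOldWalsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:8020");   // assumed NameNode address
    try (FileSystem fs = FileSystem.get(conf)) {
      Path oldWals = new Path("/hbase/oldWALs");          // assumed hbase.rootdir layout
      for (FileStatus stat : fs.listStatus(oldWals)) {
        // Rolled-and-closed WAL files land here, as in "Moved 1 WAL file(s) to .../oldWALs"
        System.out.printf("%s\t%d bytes%n", stat.getPath().getName(), stat.getLen());
      }
    }
  }
}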
2023-06-06 18:57:01,828 INFO [RS:0;jenkins-hbase20:36267] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36267 2023-06-06 18:57:01,831 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36267,1686077759897 2023-06-06 18:57:01,831 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:57:01,831 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:57:01,832 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36267,1686077759897] 2023-06-06 18:57:01,832 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36267,1686077759897; numProcessing=1 2023-06-06 18:57:01,833 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36267,1686077759897 already deleted, retry=false 2023-06-06 18:57:01,833 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36267,1686077759897 expired; onlineServers=0 2023-06-06 18:57:01,833 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,46631,1686077759858' ***** 2023-06-06 18:57:01,833 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:57:01,834 DEBUG [M:0;jenkins-hbase20:46631] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a151708, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:57:01,834 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:57:01,834 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46631,1686077759858; all regions closed. 2023-06-06 18:57:01,834 DEBUG [M:0;jenkins-hbase20:46631] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:57:01,834 DEBUG [M:0;jenkins-hbase20:46631] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:57:01,834 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
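The ZooKeeper traffic above is how the shutdown propagates: each region server holds an ephemeral znode under /hbase/rs, and deleting it produces the NodeDeleted and NodeChildrenChanged events that RegionServerTracker turns into server-expiration handling on the master. A standalone sketch of watching that path with the raw ZooKeeper client, with the quorum address and session timeout assumed:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsWatcherSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, watcher);   // assumed quorum
    // Registers a one-shot children watch; NodeChildrenChanged fires when a region
    // server's ephemeral node under /hbase/rs appears or disappears.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    System.out.println("Live region servers: " + servers);
    Thread.sleep(60_000);   // keep the session open long enough to observe events
    zk.close();
  }
}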
2023-06-06 18:57:01,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077760058] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077760058,5,FailOnTimeoutGroup] 2023-06-06 18:57:01,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077760059] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077760059,5,FailOnTimeoutGroup] 2023-06-06 18:57:01,834 DEBUG [M:0;jenkins-hbase20:46631] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:57:01,836 INFO [M:0;jenkins-hbase20:46631] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:57:01,836 INFO [M:0;jenkins-hbase20:46631] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-06 18:57:01,836 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:57:01,836 INFO [M:0;jenkins-hbase20:46631] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:57:01,836 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:01,837 DEBUG [M:0;jenkins-hbase20:46631] master.HMaster(1512): Stopping service threads 2023-06-06 18:57:01,837 INFO [M:0;jenkins-hbase20:46631] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:57:01,837 ERROR [M:0;jenkins-hbase20:46631] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-06 18:57:01,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:57:01,837 INFO [M:0;jenkins-hbase20:46631] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:57:01,837 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-06 18:57:01,838 DEBUG [M:0;jenkins-hbase20:46631] zookeeper.ZKUtil(398): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:57:01,838 WARN [M:0;jenkins-hbase20:46631] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:57:01,838 INFO [M:0;jenkins-hbase20:46631] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:57:01,838 INFO [M:0;jenkins-hbase20:46631] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:57:01,839 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:57:01,839 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:01,839 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:01,839 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:57:01,839 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:01,839 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB 2023-06-06 18:57:01,852 INFO [M:0;jenkins-hbase20:46631] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7fdba132da41442d930f3781b851d236 2023-06-06 18:57:01,859 INFO [M:0;jenkins-hbase20:46631] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7fdba132da41442d930f3781b851d236 2023-06-06 18:57:01,860 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7fdba132da41442d930f3781b851d236 as hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7fdba132da41442d930f3781b851d236 2023-06-06 18:57:01,869 INFO [M:0;jenkins-hbase20:46631] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7fdba132da41442d930f3781b851d236 2023-06-06 18:57:01,870 INFO [M:0;jenkins-hbase20:46631] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34445/user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7fdba132da41442d930f3781b851d236, entries=11, sequenceid=100, filesize=6.1 K 2023-06-06 18:57:01,871 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize 
~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=100, compaction requested=false 2023-06-06 18:57:01,872 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:01,872 DEBUG [M:0;jenkins-hbase20:46631] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:57:01,873 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b1b32c1a-a502-1fef-29c8-5ad06387cee9/MasterData/WALs/jenkins-hbase20.apache.org,46631,1686077759858 2023-06-06 18:57:01,877 INFO [M:0;jenkins-hbase20:46631] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:57:01,877 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:57:01,878 INFO [M:0;jenkins-hbase20:46631] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46631 2023-06-06 18:57:01,879 DEBUG [M:0;jenkins-hbase20:46631] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,46631,1686077759858 already deleted, retry=false 2023-06-06 18:57:01,933 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:57:01,933 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): regionserver:36267-0x101c1c6f98a0001, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:57:01,933 INFO [RS:0;jenkins-hbase20:36267] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36267,1686077759897; zookeeper connection closed. 2023-06-06 18:57:01,933 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d4671ca] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d4671ca 2023-06-06 18:57:01,934 INFO [Listener at localhost.localdomain/43035] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-06 18:57:02,033 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:57:02,033 DEBUG [Listener at localhost.localdomain/43035-EventThread] zookeeper.ZKWatcher(600): master:46631-0x101c1c6f98a0000, quorum=127.0.0.1:52238, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:57:02,033 INFO [M:0;jenkins-hbase20:46631] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46631,1686077759858; zookeeper connection closed. 
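At this point the HBase side of the minicluster is fully down, and the next test case (testLogRolling, further below) brings up a fresh one. The lifecycle that brackets each test corresponds roughly to the following HBaseTestingUtility sketch, mirroring the StartMiniClusterOption values printed in the log; the surrounding wiring is an assumption about the test's shape rather than its actual source.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testUtil = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    testUtil.startMiniCluster(option);   // "Starting up minicluster with option: ..."
    try {
      // the test body would run against testUtil.getConnection() here
    } finally {
      testUtil.shutdownMiniCluster();    // leads to the "Minicluster is down" line seen later
    }
  }
}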
2023-06-06 18:57:02,035 WARN [Listener at localhost.localdomain/43035] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:57:02,043 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:57:02,150 WARN [BP-426883111-148.251.75.209-1686077759399 heartbeating to localhost.localdomain/127.0.0.1:34445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:57:02,150 WARN [BP-426883111-148.251.75.209-1686077759399 heartbeating to localhost.localdomain/127.0.0.1:34445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-426883111-148.251.75.209-1686077759399 (Datanode Uuid b17c6be7-4289-4c69-85fa-c661ddfcc6c5) service to localhost.localdomain/127.0.0.1:34445 2023-06-06 18:57:02,151 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/dfs/data/data3/current/BP-426883111-148.251.75.209-1686077759399] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:57:02,151 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/dfs/data/data4/current/BP-426883111-148.251.75.209-1686077759399] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:57:02,153 WARN [Listener at localhost.localdomain/43035] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:57:02,156 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:57:02,157 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:57:02,263 WARN [BP-426883111-148.251.75.209-1686077759399 heartbeating to localhost.localdomain/127.0.0.1:34445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:57:02,263 WARN [BP-426883111-148.251.75.209-1686077759399 heartbeating to localhost.localdomain/127.0.0.1:34445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-426883111-148.251.75.209-1686077759399 (Datanode Uuid 69210f3d-4856-434b-90cf-62ab5a4837ca) service to localhost.localdomain/127.0.0.1:34445 2023-06-06 18:57:02,264 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/dfs/data/data1/current/BP-426883111-148.251.75.209-1686077759399] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:57:02,264 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/cluster_319be9c4-03b4-54cb-6941-a69454e072a8/dfs/data/data2/current/BP-426883111-148.251.75.209-1686077759399] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:57:02,278 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:57:02,395 INFO [Listener at localhost.localdomain/43035] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:57:02,416 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:57:02,424 INFO [Listener at localhost.localdomain/43035] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 86) - Thread LEAK? -, OpenFileDescriptor=498 (was 459) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=110 (was 73) - SystemLoadAverage LEAK? -, ProcessCount=166 (was 170), AvailableMemoryMB=5532 (was 4969) - AvailableMemoryMB LEAK? - 2023-06-06 18:57:02,431 INFO [Listener at localhost.localdomain/43035] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=110, ProcessCount=166, AvailableMemoryMB=5532 2023-06-06 18:57:02,431 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/hadoop.log.dir so I do NOT create it in target/test-data/00181bf5-2696-27d6-9420-4935fde89394 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f1917c38-618e-b315-5f88-d1bc77eaa21e/hadoop.tmp.dir so I do NOT create it in target/test-data/00181bf5-2696-27d6-9420-4935fde89394 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979, deleteOnExit=true 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/test.cache.data in system properties and HBase conf 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/hadoop.log.dir 
in system properties and HBase conf 2023-06-06 18:57:02,432 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:57:02,433 DEBUG [Listener at localhost.localdomain/43035] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:57:02,433 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/yarn.nodemanager.remote-app-log-dir in system properties and HBase 
conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/nfs.dump.dir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:57:02,434 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:57:02,435 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:57:02,435 INFO [Listener at localhost.localdomain/43035] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:57:02,436 WARN [Listener at localhost.localdomain/43035] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
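The DFS startup chatter that follows (heartbeat/safemode unit warnings, Jetty servers for the NameNode and DataNode web UIs, block reports) comes from the embedded HDFS that the testing utility starts underneath HBase. Stood up directly, that piece is a MiniDFSCluster from the hadoop-hdfs test artifact; a rough sketch with two datanodes to match numDataNodes=2 above, everything else assumed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "/tmp/minidfs-sketch");  // assumed scratch dir
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(2)        // mirrors the two datanodes whose block reports appear below
        .build();
    try {
      cluster.waitActive();     // block until the namenode and datanodes have registered
      FileSystem fs = cluster.getFileSystem();
      fs.mkdirs(new Path("/user/jenkins/test-data"));
      System.out.println("NameNode RPC port: " + cluster.getNameNodePort());
    } finally {
      cluster.shutdown();
    }
  }
}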
2023-06-06 18:57:02,438 WARN [Listener at localhost.localdomain/43035] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:57:02,438 WARN [Listener at localhost.localdomain/43035] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:57:02,460 WARN [Listener at localhost.localdomain/43035] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:57:02,462 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:57:02,466 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/java.io.tmpdir/Jetty_localhost_localdomain_43077_hdfs____xnqmmj/webapp 2023-06-06 18:57:02,537 INFO [Listener at localhost.localdomain/43035] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43077 2023-06-06 18:57:02,538 WARN [Listener at localhost.localdomain/43035] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:57:02,573 WARN [Listener at localhost.localdomain/43035] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:57:02,576 WARN [Listener at localhost.localdomain/43035] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:57:02,601 WARN [Listener at localhost.localdomain/33225] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:57:02,612 WARN [Listener at localhost.localdomain/33225] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:57:02,614 WARN [Listener at localhost.localdomain/33225] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:57:02,615 INFO [Listener at localhost.localdomain/33225] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:57:02,621 INFO [Listener at localhost.localdomain/33225] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/java.io.tmpdir/Jetty_localhost_33729_datanode____8j24w5/webapp 2023-06-06 18:57:02,696 INFO [Listener at localhost.localdomain/33225] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33729 2023-06-06 18:57:02,702 WARN [Listener at localhost.localdomain/37013] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:57:02,723 WARN [Listener at localhost.localdomain/37013] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:57:02,726 WARN [Listener at localhost.localdomain/37013] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:57:02,727 INFO [Listener at localhost.localdomain/37013] 
log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:57:02,730 INFO [Listener at localhost.localdomain/37013] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/java.io.tmpdir/Jetty_localhost_37603_datanode____.4c8dz8/webapp 2023-06-06 18:57:02,775 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6104f6d24c5c8eb7: Processing first storage report for DS-8edfa759-99c5-4566-a73c-9e06560934ca from datanode a4b89957-1c17-4585-9fb2-e530f992d50a 2023-06-06 18:57:02,775 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6104f6d24c5c8eb7: from storage DS-8edfa759-99c5-4566-a73c-9e06560934ca node DatanodeRegistration(127.0.0.1:32913, datanodeUuid=a4b89957-1c17-4585-9fb2-e530f992d50a, infoPort=34381, infoSecurePort=0, ipcPort=37013, storageInfo=lv=-57;cid=testClusterID;nsid=683919755;c=1686077822440), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:57:02,775 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6104f6d24c5c8eb7: Processing first storage report for DS-76b615b9-cdde-40b6-824b-b7df88f8b0d1 from datanode a4b89957-1c17-4585-9fb2-e530f992d50a 2023-06-06 18:57:02,775 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6104f6d24c5c8eb7: from storage DS-76b615b9-cdde-40b6-824b-b7df88f8b0d1 node DatanodeRegistration(127.0.0.1:32913, datanodeUuid=a4b89957-1c17-4585-9fb2-e530f992d50a, infoPort=34381, infoSecurePort=0, ipcPort=37013, storageInfo=lv=-57;cid=testClusterID;nsid=683919755;c=1686077822440), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:57:02,806 INFO [Listener at localhost.localdomain/37013] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37603 2023-06-06 18:57:02,813 WARN [Listener at localhost.localdomain/32863] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:57:02,869 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcdfaf3e407cf05cd: Processing first storage report for DS-8a7df69f-e623-48df-a3b3-5647280cfbe3 from datanode 55668ead-aee2-42e9-b1ef-2230f227b211 2023-06-06 18:57:02,869 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcdfaf3e407cf05cd: from storage DS-8a7df69f-e623-48df-a3b3-5647280cfbe3 node DatanodeRegistration(127.0.0.1:45475, datanodeUuid=55668ead-aee2-42e9-b1ef-2230f227b211, infoPort=42503, infoSecurePort=0, ipcPort=32863, storageInfo=lv=-57;cid=testClusterID;nsid=683919755;c=1686077822440), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:57:02,869 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcdfaf3e407cf05cd: Processing first storage report for DS-e67ff557-c731-47a8-bbe5-6396c5e47585 from datanode 55668ead-aee2-42e9-b1ef-2230f227b211 2023-06-06 18:57:02,869 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcdfaf3e407cf05cd: from storage DS-e67ff557-c731-47a8-bbe5-6396c5e47585 node DatanodeRegistration(127.0.0.1:45475, 
datanodeUuid=55668ead-aee2-42e9-b1ef-2230f227b211, infoPort=42503, infoSecurePort=0, ipcPort=32863, storageInfo=lv=-57;cid=testClusterID;nsid=683919755;c=1686077822440), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:57:02,920 DEBUG [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394 2023-06-06 18:57:02,922 INFO [Listener at localhost.localdomain/32863] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/zookeeper_0, clientPort=55735, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:57:02,923 INFO [Listener at localhost.localdomain/32863] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55735 2023-06-06 18:57:02,923 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,924 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,939 INFO [Listener at localhost.localdomain/32863] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457 with version=8 2023-06-06 18:57:02,939 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:57:02,941 INFO [Listener at localhost.localdomain/32863] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:57:02,941 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,941 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,942 INFO [Listener at localhost.localdomain/32863] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:57:02,942 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,942 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:57:02,942 INFO [Listener at localhost.localdomain/32863] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:57:02,943 INFO [Listener at localhost.localdomain/32863] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33223 2023-06-06 18:57:02,944 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,944 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,945 INFO [Listener at localhost.localdomain/32863] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33223 connecting to ZooKeeper ensemble=127.0.0.1:55735 2023-06-06 18:57:02,950 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:332230x0, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:57:02,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33223-0x101c1c7eff50000 connected 2023-06-06 18:57:02,965 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:57:02,965 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:57:02,966 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:57:02,966 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33223 2023-06-06 18:57:02,967 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33223 2023-06-06 18:57:02,967 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33223 2023-06-06 18:57:02,967 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33223 2023-06-06 18:57:02,967 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33223 2023-06-06 18:57:02,968 INFO [Listener at localhost.localdomain/32863] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457, hbase.cluster.distributed=false 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:57:02,983 INFO [Listener at localhost.localdomain/32863] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:57:02,984 INFO [Listener at localhost.localdomain/32863] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43527 2023-06-06 18:57:02,985 INFO [Listener at localhost.localdomain/32863] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:57:02,985 DEBUG [Listener at localhost.localdomain/32863] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:57:02,986 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,987 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:02,987 INFO [Listener at localhost.localdomain/32863] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43527 connecting to ZooKeeper ensemble=127.0.0.1:55735 2023-06-06 18:57:02,998 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:435270x0, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:57:03,000 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43527-0x101c1c7eff50001 connected 2023-06-06 18:57:03,000 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:57:03,001 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): 
regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:57:03,001 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ZKUtil(164): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:57:03,002 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43527 2023-06-06 18:57:03,002 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43527 2023-06-06 18:57:03,002 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43527 2023-06-06 18:57:03,002 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43527 2023-06-06 18:57:03,003 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43527 2023-06-06 18:57:03,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,011 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:57:03,012 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,015 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:57:03,015 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:57:03,015 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,017 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:57:03,018 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33223,1686077822941 from backup master directory 2023-06-06 18:57:03,018 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:57:03,019 DEBUG [Listener at 
localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,019 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:57:03,019 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-06 18:57:03,019 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,038 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/hbase.id with ID: 60c21637-3fc4-4a25-8ace-1a3a3e18cc12 2023-06-06 18:57:03,050 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:03,052 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,060 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7c4daf0f to 127.0.0.1:55735 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:57:03,065 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dd1456a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:57:03,066 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:57:03,066 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:57:03,067 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:57:03,069 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store-tmp 2023-06-06 18:57:03,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:57:03,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:03,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:03,078 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:57:03,078 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:03,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:57:03,078 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:57:03,078 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/WALs/jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,081 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33223%2C1686077822941, suffix=, logDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/WALs/jenkins-hbase20.apache.org,33223,1686077822941, archiveDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/oldWALs, maxLogs=10 2023-06-06 18:57:03,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/WALs/jenkins-hbase20.apache.org,33223,1686077822941/jenkins-hbase20.apache.org%2C33223%2C1686077822941.1686077823082 2023-06-06 18:57:03,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32913,DS-8edfa759-99c5-4566-a73c-9e06560934ca,DISK], DatanodeInfoWithStorage[127.0.0.1:45475,DS-8a7df69f-e623-48df-a3b3-5647280cfbe3,DISK]] 2023-06-06 18:57:03,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:57:03,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,091 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,093 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:57:03,093 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:57:03,094 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,095 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,095 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,099 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:57:03,105 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:57:03,105 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=848624, jitterRate=0.07908198237419128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:57:03,106 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:57:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-06 18:57:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:57:03,108 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:57:03,109 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:57:03,120 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:57:03,120 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-06 18:57:03,121 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:57:03,121 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:57:03,122 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:57:03,123 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,123 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:57:03,124 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:57:03,125 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:57:03,125 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:57:03,125 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:57:03,125 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,126 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33223,1686077822941, sessionid=0x101c1c7eff50000, setting cluster-up flag (Was=false) 2023-06-06 18:57:03,129 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,131 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:57:03,132 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,134 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,137 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:57:03,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:03,138 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.hbase-snapshot/.tmp 2023-06-06 18:57:03,141 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:57:03,142 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,143 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077853143 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:57:03,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,145 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:57:03,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:57:03,145 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:57:03,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:57:03,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:57:03,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:57:03,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:57:03,146 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077823146,5,FailOnTimeoutGroup] 2023-06-06 18:57:03,146 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077823146,5,FailOnTimeoutGroup] 2023-06-06 18:57:03,146 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,146 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-06-06 18:57:03,146 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:57:03,146 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,147 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,159 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:57:03,160 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:57:03,160 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457 2023-06-06 18:57:03,167 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,168 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:57:03,169 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info 2023-06-06 18:57:03,170 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:57:03,170 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,170 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:57:03,172 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:57:03,172 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:57:03,172 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,172 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:57:03,174 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/table 2023-06-06 18:57:03,174 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:57:03,174 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,175 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740 2023-06-06 18:57:03,175 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740 2023-06-06 18:57:03,177 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-06 18:57:03,178 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:57:03,180 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:57:03,180 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=814788, jitterRate=0.036057353019714355}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:57:03,181 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:57:03,181 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:57:03,181 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:57:03,182 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:57:03,182 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:57:03,182 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:57:03,184 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:57:03,185 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:57:03,205 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(951): ClusterId : 60c21637-3fc4-4a25-8ace-1a3a3e18cc12 2023-06-06 18:57:03,206 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:57:03,208 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:57:03,208 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:57:03,210 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:57:03,211 DEBUG [RS:0;jenkins-hbase20:43527] zookeeper.ReadOnlyZKClient(139): 
Connect 0x12fe4ff4 to 127.0.0.1:55735 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:57:03,215 DEBUG [RS:0;jenkins-hbase20:43527] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c39b3db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:57:03,215 DEBUG [RS:0;jenkins-hbase20:43527] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18062484, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:57:03,221 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43527 2023-06-06 18:57:03,221 INFO [RS:0;jenkins-hbase20:43527] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:57:03,221 INFO [RS:0;jenkins-hbase20:43527] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:57:03,221 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1022): About to register with Master. 2023-06-06 18:57:03,222 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,33223,1686077822941 with isa=jenkins-hbase20.apache.org/148.251.75.209:43527, startcode=1686077822982 2023-06-06 18:57:03,222 DEBUG [RS:0;jenkins-hbase20:43527] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:57:03,225 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57439, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:57:03,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,227 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457 2023-06-06 18:57:03,227 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33225 2023-06-06 18:57:03,227 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:57:03,229 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:57:03,229 DEBUG [RS:0;jenkins-hbase20:43527] zookeeper.ZKUtil(162): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,229 WARN [RS:0;jenkins-hbase20:43527] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-06 18:57:03,229 INFO [RS:0;jenkins-hbase20:43527] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:57:03,229 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,230 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43527,1686077822982] 2023-06-06 18:57:03,233 DEBUG [RS:0;jenkins-hbase20:43527] zookeeper.ZKUtil(162): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,234 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:57:03,234 INFO [RS:0;jenkins-hbase20:43527] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:57:03,235 INFO [RS:0;jenkins-hbase20:43527] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:57:03,236 INFO [RS:0;jenkins-hbase20:43527] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:57:03,236 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,236 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:57:03,237 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-06 18:57:03,237 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 DEBUG [RS:0;jenkins-hbase20:43527] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:57:03,238 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,239 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,239 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,249 INFO [RS:0;jenkins-hbase20:43527] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:57:03,249 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43527,1686077822982-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:57:03,259 INFO [RS:0;jenkins-hbase20:43527] regionserver.Replication(203): jenkins-hbase20.apache.org,43527,1686077822982 started 2023-06-06 18:57:03,259 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43527,1686077822982, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43527, sessionid=0x101c1c7eff50001 2023-06-06 18:57:03,259 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:57:03,259 DEBUG [RS:0;jenkins-hbase20:43527] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,259 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43527,1686077822982' 2023-06-06 18:57:03,259 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43527,1686077822982' 2023-06-06 18:57:03,260 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:57:03,261 DEBUG [RS:0;jenkins-hbase20:43527] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:57:03,261 DEBUG [RS:0;jenkins-hbase20:43527] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:57:03,261 INFO [RS:0;jenkins-hbase20:43527] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:57:03,261 INFO [RS:0;jenkins-hbase20:43527] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-06 18:57:03,335 DEBUG [jenkins-hbase20:33223] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:57:03,336 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43527,1686077822982, state=OPENING 2023-06-06 18:57:03,338 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:57:03,338 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:03,339 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:57:03,364 INFO [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43527%2C1686077822982, suffix=, logDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982, archiveDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs, maxLogs=32 2023-06-06 18:57:03,375 INFO [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077823364 2023-06-06 18:57:03,375 DEBUG [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45475,DS-8a7df69f-e623-48df-a3b3-5647280cfbe3,DISK], DatanodeInfoWithStorage[127.0.0.1:32913,DS-8edfa759-99c5-4566-a73c-9e06560934ca,DISK]] 2023-06-06 18:57:03,495 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,495 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:57:03,498 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:57:03,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:57:03,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:57:03,507 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43527%2C1686077822982.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982, archiveDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs, maxLogs=32 2023-06-06 18:57:03,518 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.meta.1686077823507.meta 2023-06-06 18:57:03,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32913,DS-8edfa759-99c5-4566-a73c-9e06560934ca,DISK], DatanodeInfoWithStorage[127.0.0.1:45475,DS-8a7df69f-e623-48df-a3b3-5647280cfbe3,DISK]] 2023-06-06 18:57:03,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:57:03,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:57:03,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:57:03,519 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:57:03,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:57:03,519 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:57:03,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:57:03,522 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:57:03,523 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info 2023-06-06 18:57:03,523 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info 2023-06-06 18:57:03,524 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:57:03,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:57:03,526 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:57:03,526 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:57:03,527 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:57:03,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:57:03,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/table 2023-06-06 18:57:03,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/table 2023-06-06 18:57:03,529 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:57:03,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,531 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740 2023-06-06 18:57:03,532 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740 2023-06-06 18:57:03,534 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:57:03,536 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:57:03,540 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=883643, jitterRate=0.12361142039299011}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:57:03,541 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:57:03,543 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077823495 2023-06-06 18:57:03,549 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:57:03,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:57:03,551 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43527,1686077822982, state=OPEN 2023-06-06 18:57:03,555 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:57:03,555 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:57:03,558 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:57:03,558 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43527,1686077822982 in 216 msec 2023-06-06 
18:57:03,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:57:03,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 376 msec 2023-06-06 18:57:03,564 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 423 msec 2023-06-06 18:57:03,564 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077823564, completionTime=-1 2023-06-06 18:57:03,564 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:57:03,564 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:57:03,569 DEBUG [hconnection-0x35c58b23-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:57:03,573 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44110, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:57:03,574 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:57:03,574 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077883574 2023-06-06 18:57:03,575 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686077943575 2023-06-06 18:57:03,575 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 10 msec 2023-06-06 18:57:03,580 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33223,1686077822941-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,580 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33223,1686077822941-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,580 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33223,1686077822941-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,581 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33223, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,581 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:57:03,581 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
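The CompactionConfiguration lines logged for each store above (ratio 1.200000, minFilesToCompact:3 / maxFilesToCompact:10, major period 604800000 with jitter 0.500000) are read from hbase-site configuration. A minimal sketch of the keys that appear to back those numbers; the mapping is an assumption based on the logged values, not something this test sets explicitly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Values mirror the defaults visible in the CompactionConfiguration log line.
    conf.setInt("hbase.hstore.compaction.min", 3);                  // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                 // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);           // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);   // off-peak ratio
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);      // major period, 7 days in ms
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);    // major jitter
    System.out.println("compaction ratio = "
        + conf.getFloat("hbase.hstore.compaction.ratio", 1.2f));
  }
}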
2023-06-06 18:57:03,581 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:57:03,582 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:57:03,582 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:57:03,584 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:57:03,584 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:57:03,586 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,587 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7 empty. 2023-06-06 18:57:03,587 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,587 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:57:03,600 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:57:03,601 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2b29e6e13d59edb5c32c367408459dc7, NAME => 'hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2b29e6e13d59edb5c32c367408459dc7, disabling compactions & flushes 2023-06-06 18:57:03,610 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. after waiting 0 ms 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,610 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,610 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2b29e6e13d59edb5c32c367408459dc7: 2023-06-06 18:57:03,613 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:57:03,614 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077823613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077823613"}]},"ts":"1686077823613"} 2023-06-06 18:57:03,616 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:57:03,617 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:57:03,617 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077823617"}]},"ts":"1686077823617"} 2023-06-06 18:57:03,618 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:57:03,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2b29e6e13d59edb5c32c367408459dc7, ASSIGN}] 2023-06-06 18:57:03,624 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2b29e6e13d59edb5c32c367408459dc7, ASSIGN 2023-06-06 18:57:03,625 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2b29e6e13d59edb5c32c367408459dc7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43527,1686077822982; forceNewPlan=false, retain=false 2023-06-06 18:57:03,778 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2b29e6e13d59edb5c32c367408459dc7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,778 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077823778"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077823778"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077823778"}]},"ts":"1686077823778"} 2023-06-06 18:57:03,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 2b29e6e13d59edb5c32c367408459dc7, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:03,936 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b29e6e13d59edb5c32c367408459dc7, NAME => 'hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:57:03,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:03,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,939 INFO [StoreOpener-2b29e6e13d59edb5c32c367408459dc7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,940 DEBUG [StoreOpener-2b29e6e13d59edb5c32c367408459dc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/info 2023-06-06 18:57:03,940 DEBUG [StoreOpener-2b29e6e13d59edb5c32c367408459dc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/info 2023-06-06 18:57:03,941 INFO [StoreOpener-2b29e6e13d59edb5c32c367408459dc7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b29e6e13d59edb5c32c367408459dc7 columnFamilyName info 2023-06-06 18:57:03,941 INFO [StoreOpener-2b29e6e13d59edb5c32c367408459dc7-1] regionserver.HStore(310): Store=2b29e6e13d59edb5c32c367408459dc7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:03,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:57:03,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:57:03,956 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2b29e6e13d59edb5c32c367408459dc7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=762055, jitterRate=-0.030997097492218018}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:57:03,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2b29e6e13d59edb5c32c367408459dc7: 2023-06-06 18:57:03,958 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7., pid=6, masterSystemTime=1686077823932 2023-06-06 18:57:03,970 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:57:03,970 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 
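The Put {...} records above show the master writing info:regioninfo and info:state cells (and, once the region opens, info:server, info:sn, info:seqnumDuringOpen) into hbase:meta, keyed by the full region name. A minimal client-side sketch that reads those same columns back from hbase:meta; the connection setup is an assumption, not part of this test:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DumpMetaStates {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result row : scanner) {
        byte[] state = row.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
        byte[] server = row.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
        // Row key is the full region name, e.g. hbase:namespace,,<regionId>.<encodedName>.
        System.out.println(Bytes.toString(row.getRow())
            + " state=" + (state == null ? "-" : Bytes.toString(state))
            + " server=" + (server == null ? "-" : Bytes.toString(server)));
      }
    }
  }
}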
2023-06-06 18:57:03,971 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2b29e6e13d59edb5c32c367408459dc7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:03,972 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077823971"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077823971"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077823971"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077823971"}]},"ts":"1686077823971"} 2023-06-06 18:57:03,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:57:03,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 2b29e6e13d59edb5c32c367408459dc7, server=jenkins-hbase20.apache.org,43527,1686077822982 in 194 msec 2023-06-06 18:57:03,980 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:57:03,980 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2b29e6e13d59edb5c32c367408459dc7, ASSIGN in 355 msec 2023-06-06 18:57:03,980 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:57:03,981 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077823981"}]},"ts":"1686077823981"} 2023-06-06 18:57:03,982 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:57:03,985 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:57:03,985 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:57:03,985 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:57:03,986 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:03,989 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 404 msec 2023-06-06 18:57:03,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:57:04,007 DEBUG [Listener at localhost.localdomain/32863-EventThread] 
zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:57:04,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 23 msec 2023-06-06 18:57:04,024 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:57:04,035 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:57:04,040 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-06-06 18:57:04,057 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:57:04,058 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:57:04,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.038sec 2023-06-06 18:57:04,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:57:04,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:57:04,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:57:04,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33223,1686077822941-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:57:04,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33223,1686077822941-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
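At this point the master has created the default namespace (pid=7) and the hbase namespace (pid=8) and reports that initialization completed in about a second. A short sketch of how a client could confirm those namespaces exist; nothing here comes from the test itself:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListNamespaces {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Expect at least "default" and "hbase" once master initialization has finished.
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());
      }
    }
  }
}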
2023-06-06 18:57:04,065 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:57:04,106 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ReadOnlyZKClient(139): Connect 0x465a1f43 to 127.0.0.1:55735 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:57:04,112 DEBUG [Listener at localhost.localdomain/32863] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@530d81f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:57:04,114 DEBUG [hconnection-0x6372d6d1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:57:04,118 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44126, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:57:04,122 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:57:04,122 INFO [Listener at localhost.localdomain/32863] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:57:04,128 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:57:04,129 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:57:04,130 INFO [Listener at localhost.localdomain/32863] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:57:04,133 DEBUG [Listener at localhost.localdomain/32863] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-06 18:57:04,140 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39952, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-06 18:57:04,144 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-06 18:57:04,144 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
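The two TableDescriptorChecker warnings fire because the test deliberately creates its table with a tiny max file size (786432 bytes) and memstore flush size (8192 bytes) so that flushes, compactions and splits happen within seconds. A sketch of how such a descriptor might be built through the public client API; this is an illustration under those assumptions, not the test's own code:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTinyTable {
  public static void main(String[] args) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        .setMaxFileSize(786432L)       // flagged by TableDescriptorChecker as too small
        .setMemStoreFlushSize(8192L)   // flagged by TableDescriptorChecker as too small
        .build();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(desc);         // the master then runs a CreateTableProcedure, as logged
    }
  }
}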
2023-06-06 18:57:04,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:57:04,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-06 18:57:04,153 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:57:04,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-06 18:57:04,154 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:57:04,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:57:04,160 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,161 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80 empty. 
2023-06-06 18:57:04,161 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,162 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-06 18:57:04,201 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-06 18:57:04,207 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2bbba3429b25e6edc94320062d822f80, NAME => 'TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/.tmp 2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 2bbba3429b25e6edc94320062d822f80, disabling compactions & flushes 2023-06-06 18:57:04,249 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. after waiting 0 ms 2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:04,249 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 
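The region created above, TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80., follows the usual naming pattern table,startKey,regionId.encodedName. with an empty start key and the creation timestamp as the region id. Once the create completes, those pieces can be read back through the Admin API; a minimal sketch under assumed connection settings, not taken from the test:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ShowRegionNames {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (RegionInfo ri : admin.getRegions(TableName.valueOf("TestLogRolling-testLogRolling"))) {
        System.out.println(ri.getRegionNameAsString()        // table,startKey,regionId.encoded.
            + " encoded=" + ri.getEncodedName()              // the 2bbba3429b... portion
            + " startKey=" + Bytes.toStringBinary(ri.getStartKey()));
      }
    }
  }
}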
2023-06-06 18:57:04,249 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:04,253 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:57:04,254 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077824254"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077824254"}]},"ts":"1686077824254"} 2023-06-06 18:57:04,259 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:57:04,260 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:57:04,260 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077824260"}]},"ts":"1686077824260"} 2023-06-06 18:57:04,262 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-06 18:57:04,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, ASSIGN}] 2023-06-06 18:57:04,268 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, ASSIGN 2023-06-06 18:57:04,269 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43527,1686077822982; forceNewPlan=false, retain=false 2023-06-06 18:57:04,421 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=2bbba3429b25e6edc94320062d822f80, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:04,421 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077824421"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077824421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077824421"}]},"ts":"1686077824421"} 2023-06-06 18:57:04,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:04,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:04,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2bbba3429b25e6edc94320062d822f80, NAME => 'TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:57:04,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:04,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,583 INFO [StoreOpener-2bbba3429b25e6edc94320062d822f80-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,585 DEBUG [StoreOpener-2bbba3429b25e6edc94320062d822f80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info 2023-06-06 18:57:04,585 DEBUG [StoreOpener-2bbba3429b25e6edc94320062d822f80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info 2023-06-06 18:57:04,585 INFO [StoreOpener-2bbba3429b25e6edc94320062d822f80-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2bbba3429b25e6edc94320062d822f80 columnFamilyName info 2023-06-06 18:57:04,586 INFO [StoreOpener-2bbba3429b25e6edc94320062d822f80-1] regionserver.HStore(310): Store=2bbba3429b25e6edc94320062d822f80/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:04,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:04,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:57:04,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2bbba3429b25e6edc94320062d822f80; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=765958, jitterRate=-0.026034235954284668}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:57:04,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:04,595 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80., pid=11, masterSystemTime=1686077824576 2023-06-06 18:57:04,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:04,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 
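The Opened ... line above records the split policy chain for this region: IncreasingToUpperBoundRegionSplitPolicy with initialSize=16384 (twice the 8192-byte flush size) wrapping ConstantSizeRegionSplitPolicy with desiredMaxFileSize=765958, which is consistent with hbase.hregion.max.filesize (786432) scaled by the logged jitterRate of about -0.026. The later "Should split because info size=..., sizeToCheck=16.0 K" records follow from the policy capping the check at initialSize times the cube of the number of this table's regions on the server, which is 1 here. A simplified standalone rendering of that size check; this is my reading of the policy, not the actual class:

public class SplitSizeCheck {
  /**
   * Simplified version of the threshold used by IncreasingToUpperBoundRegionSplitPolicy:
   * grow with the cube of the number of regions of the table on this server,
   * capped at the (jittered) desired max file size.
   */
  static long sizeToCheck(long initialSize, long desiredMaxFileSize, int regionCount) {
    if (regionCount == 0) {
      return desiredMaxFileSize;
    }
    long cubed = initialSize * regionCount * regionCount * regionCount;
    return Math.min(desiredMaxFileSize, cubed);
  }

  public static void main(String[] args) {
    // Numbers taken from the log: initialSize=16384, desiredMaxFileSize=765958, one region.
    System.out.println(sizeToCheck(16384L, 765958L, 1));  // 16384, i.e. the "sizeToCheck=16.0 K" seen later
  }
}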
2023-06-06 18:57:04,598 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=2bbba3429b25e6edc94320062d822f80, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:04,598 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077824597"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077824597"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077824597"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077824597"}]},"ts":"1686077824597"} 2023-06-06 18:57:04,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-06 18:57:04,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982 in 176 msec 2023-06-06 18:57:04,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-06 18:57:04,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, ASSIGN in 337 msec 2023-06-06 18:57:04,606 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:57:04,606 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077824606"}]},"ts":"1686077824606"} 2023-06-06 18:57:04,608 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-06 18:57:04,610 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:57:04,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 465 msec 2023-06-06 18:57:07,218 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:57:09,235 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-06 18:57:09,235 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-06 18:57:09,235 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-06 18:57:14,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33223] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-06 18:57:14,157 INFO [Listener at localhost.localdomain/32863] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:TestLogRolling-testLogRolling, procId: 9 completed 2023-06-06 18:57:14,161 DEBUG [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-06 18:57:14,161 DEBUG [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:14,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:14,176 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2bbba3429b25e6edc94320062d822f80 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:57:14,187 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/9e21ad3f9ba44b49a23e138dc3af5fc7 2023-06-06 18:57:14,195 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/9e21ad3f9ba44b49a23e138dc3af5fc7 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7 2023-06-06 18:57:14,201 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7, entries=7, sequenceid=11, filesize=12.1 K 2023-06-06 18:57:14,202 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 2bbba3429b25e6edc94320062d822f80 in 26ms, sequenceid=11, compaction requested=false 2023-06-06 18:57:14,203 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:14,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:14,204 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2bbba3429b25e6edc94320062d822f80 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-06 18:57:14,217 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=34 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/7a411fa37e3e4cb289ca2c663383b7b8 2023-06-06 18:57:14,224 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/7a411fa37e3e4cb289ca2c663383b7b8 as 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 2023-06-06 18:57:14,231 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8, entries=20, sequenceid=34, filesize=25.8 K 2023-06-06 18:57:14,232 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 2bbba3429b25e6edc94320062d822f80 in 28ms, sequenceid=34, compaction requested=false 2023-06-06 18:57:14,232 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:14,232 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=37.9 K, sizeToCheck=16.0 K 2023-06-06 18:57:14,232 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:14,232 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 because midkey is the same as first or last row 2023-06-06 18:57:16,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:16,222 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2bbba3429b25e6edc94320062d822f80 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:57:16,236 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=44 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/97c551848df84ccbaf4ec2dedd921afa 2023-06-06 18:57:16,243 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/97c551848df84ccbaf4ec2dedd921afa as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa 2023-06-06 18:57:16,250 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa, entries=7, sequenceid=44, filesize=12.1 K 2023-06-06 18:57:16,251 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 2bbba3429b25e6edc94320062d822f80 in 29ms, sequenceid=44, compaction requested=true 2023-06-06 18:57:16,251 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2bbba3429b25e6edc94320062d822f80: 
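The flush records above show the cycle the test keeps driving: the memstore for the single info family passes the roughly 8 KB flush size, the flusher writes a temporary HFile under .tmp/, commits it into info/, and the split policy check then runs against the enlarged store. A hedged sketch of client-side load that would produce the same cadence; the row keys and the ~1 KB value size are guesses based on the ~1 KB-per-entry flushes in the log, not the test's actual writer:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DriveFlushes {
  public static void main(String[] args) throws Exception {
    byte[] family = Bytes.toBytes("info");
    byte[] value = new byte[1024];             // ~1 KB per cell, so a handful of puts fills an 8 KB memstore
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
      for (int i = 0; i < 100; i++) {
        Put put = new Put(Bytes.toBytes(String.format("row-%05d", i)));
        put.addColumn(family, Bytes.toBytes("q"), value);
        table.put(put);                        // roughly every 7-8 puts should trigger a memstore flush
      }
    }
  }
}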
2023-06-06 18:57:16,251 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=50.0 K, sizeToCheck=16.0 K 2023-06-06 18:57:16,251 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:16,251 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 because midkey is the same as first or last row 2023-06-06 18:57:16,251 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:16,251 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:16,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:16,252 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2bbba3429b25e6edc94320062d822f80 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-06 18:57:16,254 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 51218 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:16,254 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): 2bbba3429b25e6edc94320062d822f80/info is initiating minor compaction (all files) 2023-06-06 18:57:16,255 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2bbba3429b25e6edc94320062d822f80/info in TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 
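The "Exploring compaction algorithm has selected 3 files of size 51218 ... with 1 in ratio" line reflects the ratio test at the heart of ExploringCompactionPolicy: a candidate set is acceptable when no single file is larger than the compaction ratio (1.2 here) times the combined size of the other files. A simplified standalone version of that test, using sizes chosen to match the logged total of 51218 and the 12.1 K / 25.8 K / 12.1 K store files flushed earlier; this illustrates the rule, it is not the policy's actual code:

public class RatioCheck {
  /** True when every file is <= ratio times the sum of the other files in the candidate set. */
  static boolean inRatio(long[] fileSizes, double ratio) {
    long total = 0;
    for (long size : fileSizes) {
      total += size;
    }
    for (long size : fileSizes) {
      if (size > ratio * (total - size)) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Approximate sizes (bytes) of the three selected HFiles, chosen to sum to the logged 51218.
    long[] files = {12380, 26420, 12418};
    System.out.println(inRatio(files, 1.2));  // true: the largest file is within 1.2x of the rest combined
  }
}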
2023-06-06 18:57:16,255 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp, totalSize=50.0 K 2023-06-06 18:57:16,255 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 9e21ad3f9ba44b49a23e138dc3af5fc7, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686077834167 2023-06-06 18:57:16,256 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 7a411fa37e3e4cb289ca2c663383b7b8, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=34, earliestPutTs=1686077834177 2023-06-06 18:57:16,257 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 97c551848df84ccbaf4ec2dedd921afa, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1686077834204 2023-06-06 18:57:16,272 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=64 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/6f69c7f374ea483b9c953ed588e62851 2023-06-06 18:57:16,280 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/6f69c7f374ea483b9c953ed588e62851 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851 2023-06-06 18:57:16,280 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): 2bbba3429b25e6edc94320062d822f80#info#compaction#29 average throughput is 17.44 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:16,282 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-06 18:57:16,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] ipc.CallRunner(144): callId: 72 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:44126 deadline: 1686077846282, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:16,293 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851, entries=17, sequenceid=64, filesize=22.6 K 2023-06-06 18:57:16,294 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for 2bbba3429b25e6edc94320062d822f80 in 42ms, sequenceid=64, compaction requested=false 2023-06-06 18:57:16,294 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:16,294 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.7 K, sizeToCheck=16.0 K 2023-06-06 18:57:16,294 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:16,294 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 because midkey is the same as first or last row 2023-06-06 18:57:16,300 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/055d3e8137544bd3b3955d2426975b51 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51 2023-06-06 18:57:16,307 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2bbba3429b25e6edc94320062d822f80/info of 2bbba3429b25e6edc94320062d822f80 into 055d3e8137544bd3b3955d2426975b51(size=40.7 K), total size for store is 
63.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-06 18:57:16,307 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:16,307 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80., storeName=2bbba3429b25e6edc94320062d822f80/info, priority=13, startTime=1686077836251; duration=0sec 2023-06-06 18:57:16,308 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=63.3 K, sizeToCheck=16.0 K 2023-06-06 18:57:16,308 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:16,308 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51 because midkey is the same as first or last row 2023-06-06 18:57:16,308 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:26,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:26,346 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2bbba3429b25e6edc94320062d822f80 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB 2023-06-06 18:57:26,366 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=81 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/7d0fb3d8b4c84078ac790aab36f8ac91 2023-06-06 18:57:26,372 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/7d0fb3d8b4c84078ac790aab36f8ac91 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91 2023-06-06 18:57:26,377 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91, entries=13, sequenceid=81, filesize=18.4 K 2023-06-06 18:57:26,378 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=0 B/0 for 2bbba3429b25e6edc94320062d822f80 in 32ms, sequenceid=81, compaction requested=true 2023-06-06 18:57:26,378 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:26,378 DEBUG [MemStoreFlusher.0] 
regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=81.7 K, sizeToCheck=16.0 K 2023-06-06 18:57:26,379 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:26,379 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51 because midkey is the same as first or last row 2023-06-06 18:57:26,379 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-06 18:57:26,379 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:26,380 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83703 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:26,380 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): 2bbba3429b25e6edc94320062d822f80/info is initiating minor compaction (all files) 2023-06-06 18:57:26,380 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2bbba3429b25e6edc94320062d822f80/info in TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:26,380 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp, totalSize=81.7 K 2023-06-06 18:57:26,381 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 055d3e8137544bd3b3955d2426975b51, keycount=34, bloomtype=ROW, size=40.7 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1686077834167 2023-06-06 18:57:26,381 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 6f69c7f374ea483b9c953ed588e62851, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=64, earliestPutTs=1686077836223 2023-06-06 18:57:26,381 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 7d0fb3d8b4c84078ac790aab36f8ac91, keycount=13, bloomtype=ROW, size=18.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1686077836253 2023-06-06 18:57:26,396 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] 
throttle.PressureAwareThroughputController(145): 2bbba3429b25e6edc94320062d822f80#info#compaction#31 average throughput is 21.89 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:26,415 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/.tmp/info/a75b5585e43741bda6d33d4f810b7452 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452 2023-06-06 18:57:26,421 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2bbba3429b25e6edc94320062d822f80/info of 2bbba3429b25e6edc94320062d822f80 into a75b5585e43741bda6d33d4f810b7452(size=72.5 K), total size for store is 72.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-06 18:57:26,421 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:26,421 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80., storeName=2bbba3429b25e6edc94320062d822f80/info, priority=13, startTime=1686077846379; duration=0sec 2023-06-06 18:57:26,421 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.5 K, sizeToCheck=16.0 K 2023-06-06 18:57:26,421 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-06 18:57:26,423 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:26,423 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:26,424 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33223] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,43527,1686077822982, parent={ENCODED => 2bbba3429b25e6edc94320062d822f80, NAME => 'TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-06 18:57:26,431 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33223] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:26,436 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33223] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2bbba3429b25e6edc94320062d822f80, daughterA=eb3ff360e401d921fe58a4b8c8476b44, daughterB=fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:26,437 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2bbba3429b25e6edc94320062d822f80, daughterA=eb3ff360e401d921fe58a4b8c8476b44, daughterB=fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:26,437 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2bbba3429b25e6edc94320062d822f80, daughterA=eb3ff360e401d921fe58a4b8c8476b44, daughterB=fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:26,437 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2bbba3429b25e6edc94320062d822f80, daughterA=eb3ff360e401d921fe58a4b8c8476b44, daughterB=fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:26,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, UNASSIGN}] 2023-06-06 18:57:26,447 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, UNASSIGN 2023-06-06 18:57:26,449 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2bbba3429b25e6edc94320062d822f80, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:26,449 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077846448"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077846448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077846448"}]},"ts":"1686077846448"} 2023-06-06 18:57:26,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:26,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:26,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2bbba3429b25e6edc94320062d822f80, disabling compactions & flushes 2023-06-06 18:57:26,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:26,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:26,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 
after waiting 0 ms 2023-06-06 18:57:26,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:26,625 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91] to archive 2023-06-06 18:57:26,626 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-06 18:57:26,628 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/9e21ad3f9ba44b49a23e138dc3af5fc7 2023-06-06 18:57:26,629 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7a411fa37e3e4cb289ca2c663383b7b8 2023-06-06 18:57:26,630 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/055d3e8137544bd3b3955d2426975b51 2023-06-06 18:57:26,632 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/97c551848df84ccbaf4ec2dedd921afa 2023-06-06 18:57:26,633 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/6f69c7f374ea483b9c953ed588e62851 2023-06-06 18:57:26,634 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91 to 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/7d0fb3d8b4c84078ac790aab36f8ac91 2023-06-06 18:57:26,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=1 2023-06-06 18:57:26,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. 2023-06-06 18:57:26,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2bbba3429b25e6edc94320062d822f80: 2023-06-06 18:57:26,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:26,644 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2bbba3429b25e6edc94320062d822f80, regionState=CLOSED 2023-06-06 18:57:26,645 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077846644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077846644"}]},"ts":"1686077846644"} 2023-06-06 18:57:26,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-06 18:57:26,650 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 2bbba3429b25e6edc94320062d822f80, server=jenkins-hbase20.apache.org,43527,1686077822982 in 195 msec 2023-06-06 18:57:26,652 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-06 18:57:26,652 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2bbba3429b25e6edc94320062d822f80, UNASSIGN in 203 msec 2023-06-06 18:57:26,664 INFO [PEWorker-5] assignment.SplitTableRegionProcedure(694): pid=12 splitting 1 storefiles, region=2bbba3429b25e6edc94320062d822f80, threads=1 2023-06-06 18:57:26,665 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452 for region: 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:26,702 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452 for region: 2bbba3429b25e6edc94320062d822f80 2023-06-06 18:57:26,702 DEBUG [PEWorker-5] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 2bbba3429b25e6edc94320062d822f80 Daughter A: 1 storefiles, Daughter B: 1 storefiles. 
2023-06-06 18:57:26,725 DEBUG [PEWorker-5] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-06-06 18:57:26,727 DEBUG [PEWorker-5] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-06-06 18:57:26,729 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686077846729"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1686077846729"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1686077846729"}]},"ts":"1686077846729"} 2023-06-06 18:57:26,729 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077846729"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077846729"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077846729"}]},"ts":"1686077846729"} 2023-06-06 18:57:26,729 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077846729"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077846729"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077846729"}]},"ts":"1686077846729"} 2023-06-06 18:57:26,766 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43527] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-06 18:57:26,767 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-06-06 18:57:26,767 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-06 18:57:26,775 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=eb3ff360e401d921fe58a4b8c8476b44, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=fc3cee2da8965b24feceaad789cf1296, ASSIGN}] 2023-06-06 18:57:26,776 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=fc3cee2da8965b24feceaad789cf1296, ASSIGN 2023-06-06 18:57:26,776 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=eb3ff360e401d921fe58a4b8c8476b44, ASSIGN 2023-06-06 18:57:26,777 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/info/6dd9ea0b74f14750b3ed1e9964fafeb6 2023-06-06 18:57:26,777 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=fc3cee2da8965b24feceaad789cf1296, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,43527,1686077822982; forceNewPlan=false, retain=false 2023-06-06 18:57:26,778 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=eb3ff360e401d921fe58a4b8c8476b44, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,43527,1686077822982; forceNewPlan=false, retain=false 2023-06-06 18:57:26,791 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/table/a3768d51e93446e4a79e2c673da44e35 2023-06-06 18:57:26,797 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/info/6dd9ea0b74f14750b3ed1e9964fafeb6 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info/6dd9ea0b74f14750b3ed1e9964fafeb6 2023-06-06 18:57:26,801 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info/6dd9ea0b74f14750b3ed1e9964fafeb6, entries=29, sequenceid=17, filesize=8.6 K 2023-06-06 18:57:26,802 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/table/a3768d51e93446e4a79e2c673da44e35 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/table/a3768d51e93446e4a79e2c673da44e35 2023-06-06 18:57:26,808 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/table/a3768d51e93446e4a79e2c673da44e35, entries=4, sequenceid=17, filesize=4.8 K 2023-06-06 18:57:26,809 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 42ms, sequenceid=17, compaction requested=false 2023-06-06 18:57:26,810 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-06 18:57:26,930 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=eb3ff360e401d921fe58a4b8c8476b44, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:26,930 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=fc3cee2da8965b24feceaad789cf1296, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:26,930 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077846930"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077846930"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077846930"}]},"ts":"1686077846930"} 2023-06-06 18:57:26,931 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077846930"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077846930"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077846930"}]},"ts":"1686077846930"} 2023-06-06 18:57:26,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure eb3ff360e401d921fe58a4b8c8476b44, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:26,936 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982}] 2023-06-06 18:57:27,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 
2023-06-06 18:57:27,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb3ff360e401d921fe58a4b8c8476b44, NAME => 'TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-06 18:57:27,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:27,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,098 INFO [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,100 DEBUG [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info 2023-06-06 18:57:27,100 DEBUG [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info 2023-06-06 18:57:27,100 INFO [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb3ff360e401d921fe58a4b8c8476b44 columnFamilyName info 2023-06-06 18:57:27,117 DEBUG [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-bottom 2023-06-06 
18:57:27,118 INFO [StoreOpener-eb3ff360e401d921fe58a4b8c8476b44-1] regionserver.HStore(310): Store=eb3ff360e401d921fe58a4b8c8476b44/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:27,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:57:27,124 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened eb3ff360e401d921fe58a4b8c8476b44; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=860144, jitterRate=0.0937301367521286}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:57:27,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for eb3ff360e401d921fe58a4b8c8476b44: 2023-06-06 18:57:27,125 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44., pid=17, masterSystemTime=1686077847089 2023-06-06 18:57:27,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:27,126 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-06 18:57:27,127 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:57:27,127 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): eb3ff360e401d921fe58a4b8c8476b44/info is initiating minor compaction (all files) 2023-06-06 18:57:27,127 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of eb3ff360e401d921fe58a4b8c8476b44/info in TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 
2023-06-06 18:57:27,127 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-bottom] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/.tmp, totalSize=72.5 K 2023-06-06 18:57:27,128 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1686077834167 2023-06-06 18:57:27,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:57:27,128 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:57:27,128 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:27,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc3cee2da8965b24feceaad789cf1296, NAME => 'TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-06 18:57:27,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,129 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=eb3ff360e401d921fe58a4b8c8476b44, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:27,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:57:27,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,129 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077847129"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077847129"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077847129"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077847129"}]},"ts":"1686077847129"} 2023-06-06 18:57:27,129 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,130 INFO [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,131 DEBUG [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info 2023-06-06 18:57:27,131 DEBUG [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info 2023-06-06 18:57:27,132 INFO [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc3cee2da8965b24feceaad789cf1296 columnFamilyName info 2023-06-06 18:57:27,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-06-06 18:57:27,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure eb3ff360e401d921fe58a4b8c8476b44, server=jenkins-hbase20.apache.org,43527,1686077822982 in 197 msec 2023-06-06 18:57:27,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=eb3ff360e401d921fe58a4b8c8476b44, ASSIGN in 358 msec 2023-06-06 18:57:27,135 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): eb3ff360e401d921fe58a4b8c8476b44#info#compaction#34 average throughput is 20.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:27,141 DEBUG [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-top 2023-06-06 18:57:27,142 INFO [StoreOpener-fc3cee2da8965b24feceaad789cf1296-1] regionserver.HStore(310): Store=fc3cee2da8965b24feceaad789cf1296/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:57:27,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:27,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened fc3cee2da8965b24feceaad789cf1296; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=795120, jitterRate=0.011048167943954468}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:57:27,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:27,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., pid=18, masterSystemTime=1686077847089 2023-06-06 18:57:27,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-06 18:57:27,156 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-06 18:57:27,157 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:57:27,157 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:57:27,158 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:27,158 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/.tmp/info/d1a4811c08ef4a299abd3ad47d15623e as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/d1a4811c08ef4a299abd3ad47d15623e 2023-06-06 18:57:27,158 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-top] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=72.5 K 2023-06-06 18:57:27,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:27,158 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1686077834167 2023-06-06 18:57:27,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:27,159 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=fc3cee2da8965b24feceaad789cf1296, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:27,160 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686077847159"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077847159"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077847159"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077847159"}]},"ts":"1686077847159"} 2023-06-06 18:57:27,163 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#35 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:27,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-06-06 18:57:27,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982 in 226 msec 2023-06-06 18:57:27,165 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in eb3ff360e401d921fe58a4b8c8476b44/info of eb3ff360e401d921fe58a4b8c8476b44 into d1a4811c08ef4a299abd3ad47d15623e(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-06 18:57:27,165 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for eb3ff360e401d921fe58a4b8c8476b44: 2023-06-06 18:57:27,165 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44., storeName=eb3ff360e401d921fe58a4b8c8476b44/info, priority=15, startTime=1686077847125; duration=0sec 2023-06-06 18:57:27,166 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:27,166 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-06-06 18:57:27,167 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=fc3cee2da8965b24feceaad789cf1296, ASSIGN in 389 msec 2023-06-06 18:57:27,168 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2bbba3429b25e6edc94320062d822f80, daughterA=eb3ff360e401d921fe58a4b8c8476b44, daughterB=fc3cee2da8965b24feceaad789cf1296 in 736 msec 2023-06-06 18:57:27,180 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/0b5b07ec1fe6418e973ae5e7a0f49089 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b5b07ec1fe6418e973ae5e7a0f49089 2023-06-06 18:57:27,186 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 0b5b07ec1fe6418e973ae5e7a0f49089(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:57:27,186 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:27,186 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=15, startTime=1686077847154; duration=0sec 2023-06-06 18:57:27,186 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:28,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:44126 deadline: 1686077858348, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1686077824144.2bbba3429b25e6edc94320062d822f80. is not online on jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:32,213 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:57:38,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:38,390 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:57:38,402 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=96 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/66f9b100ad1145dca31515d1da36076a 2023-06-06 18:57:38,409 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/66f9b100ad1145dca31515d1da36076a as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a 2023-06-06 18:57:38,415 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a, entries=7, sequenceid=96, filesize=12.1 K 2023-06-06 18:57:38,416 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for fc3cee2da8965b24feceaad789cf1296 in 26ms, sequenceid=96, compaction requested=false 2023-06-06 18:57:38,416 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:38,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:38,417 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-06-06 18:57:38,426 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=117 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/9308194ba60c4aaeb79e7964de401d72 2023-06-06 18:57:38,431 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/9308194ba60c4aaeb79e7964de401d72 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72 2023-06-06 18:57:38,435 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72, entries=18, sequenceid=117, filesize=23.7 K 2023-06-06 18:57:38,436 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=7.36 KB/7532 for fc3cee2da8965b24feceaad789cf1296 in 19ms, sequenceid=117, compaction requested=true 2023-06-06 18:57:38,436 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:38,436 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:38,436 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:38,437 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 44815 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:38,437 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:57:38,437 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:57:38,437 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b5b07ec1fe6418e973ae5e7a0f49089, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=43.8 K 2023-06-06 18:57:38,437 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting 0b5b07ec1fe6418e973ae5e7a0f49089, keycount=3, bloomtype=ROW, size=8.0 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1686077836279 2023-06-06 18:57:38,438 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting 66f9b100ad1145dca31515d1da36076a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=96, earliestPutTs=1686077858379 2023-06-06 18:57:38,438 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting 9308194ba60c4aaeb79e7964de401d72, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=117, earliestPutTs=1686077858391 2023-06-06 18:57:38,447 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#38 average throughput is 28.73 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:38,459 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/878dd8c046ee45a8ae1c4c697f9a65e1 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/878dd8c046ee45a8ae1c4c697f9a65e1 2023-06-06 18:57:38,464 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 878dd8c046ee45a8ae1c4c697f9a65e1(size=34.4 K), total size for store is 34.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:57:38,464 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:38,464 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077858436; duration=0sec 2023-06-06 18:57:38,464 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:40,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:40,428 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-06-06 18:57:40,446 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=129 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/a29a87b55b544ed9bbee7ec52d7e9151 2023-06-06 18:57:40,453 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/a29a87b55b544ed9bbee7ec52d7e9151 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151 2023-06-06 18:57:40,460 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151, entries=8, sequenceid=129, filesize=13.2 K 2023-06-06 18:57:40,462 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=16.81 KB/17216 for fc3cee2da8965b24feceaad789cf1296 in 33ms, sequenceid=129, compaction requested=false 2023-06-06 18:57:40,462 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:40,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:40,462 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-06 18:57:40,472 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=149 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/861217bb7a3849498608f7b6c920709e 2023-06-06 18:57:40,477 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(4965): Region is too busy due to 
exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-06 18:57:40,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:44126 deadline: 1686077870477, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:40,478 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/861217bb7a3849498608f7b6c920709e as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e 2023-06-06 18:57:40,482 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e, entries=17, sequenceid=149, filesize=22.7 K 2023-06-06 18:57:40,483 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for fc3cee2da8965b24feceaad789cf1296 in 21ms, sequenceid=149, compaction requested=true 2023-06-06 18:57:40,483 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:40,484 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-06 18:57:40,484 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:40,485 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 71922 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:40,485 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:57:40,485 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] 
regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:40,485 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/878dd8c046ee45a8ae1c4c697f9a65e1, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=70.2 K 2023-06-06 18:57:40,486 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting 878dd8c046ee45a8ae1c4c697f9a65e1, keycount=28, bloomtype=ROW, size=34.4 K, encoding=NONE, compression=NONE, seqNum=117, earliestPutTs=1686077836279 2023-06-06 18:57:40,486 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting a29a87b55b544ed9bbee7ec52d7e9151, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=129, earliestPutTs=1686077858417 2023-06-06 18:57:40,486 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] compactions.Compactor(207): Compacting 861217bb7a3849498608f7b6c920709e, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=149, earliestPutTs=1686077860430 2023-06-06 18:57:40,500 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#41 average throughput is 27.19 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:40,514 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/c8c3f15267854a67b12cfbf76ba5aae3 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/c8c3f15267854a67b12cfbf76ba5aae3 2023-06-06 18:57:40,520 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into c8c3f15267854a67b12cfbf76ba5aae3(size=60.9 K), total size for store is 60.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:57:40,520 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:40,520 INFO [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077860484; duration=0sec 2023-06-06 18:57:40,520 DEBUG [RS:0;jenkins-hbase20:43527-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:48,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=32, reuseRatio=71.11% 2023-06-06 18:57:48,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-06-06 18:57:50,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:50,580 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB 2023-06-06 18:57:50,597 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=166 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e9d74a76c26f423ea4f750f96b9af738 2023-06-06 18:57:50,603 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e9d74a76c26f423ea4f750f96b9af738 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738 2023-06-06 18:57:50,609 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738, entries=13, sequenceid=166, filesize=18.4 K 2023-06-06 18:57:50,610 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=1.05 KB/1076 for fc3cee2da8965b24feceaad789cf1296 in 30ms, sequenceid=166, compaction requested=false 2023-06-06 18:57:50,610 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:52,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:52,596 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:57:52,616 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=176 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/0b9bfd3dad984b5aafffb8a9a5f8537f 2023-06-06 18:57:52,626 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/0b9bfd3dad984b5aafffb8a9a5f8537f as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f 2023-06-06 18:57:52,633 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f, entries=7, sequenceid=176, filesize=12.1 K 2023-06-06 18:57:52,634 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=15.76 KB/16140 for fc3cee2da8965b24feceaad789cf1296 in 38ms, sequenceid=176, compaction requested=true 2023-06-06 18:57:52,635 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:52,635 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:52,635 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:52,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:52,636 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-06 18:57:52,636 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93652 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:52,636 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:57:52,636 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:57:52,636 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/c8c3f15267854a67b12cfbf76ba5aae3, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=91.5 K 2023-06-06 18:57:52,637 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting c8c3f15267854a67b12cfbf76ba5aae3, keycount=53, bloomtype=ROW, size=60.9 K, encoding=NONE, compression=NONE, seqNum=149, earliestPutTs=1686077836279 2023-06-06 18:57:52,638 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting e9d74a76c26f423ea4f750f96b9af738, keycount=13, bloomtype=ROW, size=18.4 K, encoding=NONE, compression=NONE, seqNum=166, earliestPutTs=1686077860463 2023-06-06 18:57:52,638 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 0b9bfd3dad984b5aafffb8a9a5f8537f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1686077870582 2023-06-06 18:57:52,670 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#45 average throughput is 24.97 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:52,670 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=196 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/0a2cb37d95e047a9bac8c2e40b44d0af 2023-06-06 18:57:52,678 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/0a2cb37d95e047a9bac8c2e40b44d0af as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af 2023-06-06 18:57:52,685 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af, entries=17, sequenceid=196, filesize=22.7 K 2023-06-06 18:57:52,686 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for fc3cee2da8965b24feceaad789cf1296 in 50ms, sequenceid=196, compaction requested=false 2023-06-06 18:57:52,686 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:52,696 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/1721cc233c7845249cdc063a73d452bd as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/1721cc233c7845249cdc063a73d452bd 2023-06-06 18:57:52,702 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 1721cc233c7845249cdc063a73d452bd(size=82.1 K), total size for store is 104.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:57:52,702 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:52,702 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077872635; duration=0sec 2023-06-06 18:57:52,702 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:54,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:54,651 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-06 18:57:54,663 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=210 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/6e416ae560b94728a794bcbf97174a53 2023-06-06 18:57:54,671 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/6e416ae560b94728a794bcbf97174a53 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53 2023-06-06 18:57:54,677 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53, entries=10, sequenceid=210, filesize=15.3 K 2023-06-06 18:57:54,678 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=18.91 KB/19368 for fc3cee2da8965b24feceaad789cf1296 in 27ms, sequenceid=210, compaction requested=true 2023-06-06 18:57:54,678 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:54,678 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:54,678 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:57:54,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:57:54,679 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-06 18:57:54,679 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] 
compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 122937 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:57:54,679 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:57:54,679 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:57:54,679 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/1721cc233c7845249cdc063a73d452bd, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=120.1 K 2023-06-06 18:57:54,680 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 1721cc233c7845249cdc063a73d452bd, keycount=73, bloomtype=ROW, size=82.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1686077836279 2023-06-06 18:57:54,680 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 0a2cb37d95e047a9bac8c2e40b44d0af, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=196, earliestPutTs=1686077872597 2023-06-06 18:57:54,681 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 6e416ae560b94728a794bcbf97174a53, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1686077872636 2023-06-06 18:57:54,693 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=232 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/753e4f63c8fb4885a522996000aec5b1 2023-06-06 18:57:54,696 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-06 18:57:54,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:44126 deadline: 1686077884696, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=fc3cee2da8965b24feceaad789cf1296, server=jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:57:54,698 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#48 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:57:54,700 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/753e4f63c8fb4885a522996000aec5b1 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1 2023-06-06 18:57:54,705 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1, entries=19, sequenceid=232, filesize=24.8 K 2023-06-06 18:57:54,708 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for fc3cee2da8965b24feceaad789cf1296 in 29ms, sequenceid=232, compaction requested=false 2023-06-06 18:57:54,708 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:54,714 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/12371438138a4354918703122639ecf2 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/12371438138a4354918703122639ecf2 2023-06-06 18:57:54,720 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] 
regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 12371438138a4354918703122639ecf2(size=110.7 K), total size for store is 135.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-06 18:57:54,721 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:57:54,721 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077874678; duration=0sec 2023-06-06 18:57:54,721 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:57:55,605 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-06 18:58:04,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:04,722 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-06-06 18:58:04,734 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=247 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/f6aa7c42166b41ff9c91875276dd0950 2023-06-06 18:58:04,741 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/f6aa7c42166b41ff9c91875276dd0950 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950 2023-06-06 18:58:04,746 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950, entries=11, sequenceid=247, filesize=16.3 K 2023-06-06 18:58:04,747 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for fc3cee2da8965b24feceaad789cf1296 in 25ms, sequenceid=247, compaction requested=true 2023-06-06 18:58:04,747 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:04,747 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-06 18:58:04,747 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 
eligible, 16 blocking 2023-06-06 18:58:04,748 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155387 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:58:04,749 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:58:04,749 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:58:04,749 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/12371438138a4354918703122639ecf2, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=151.7 K 2023-06-06 18:58:04,749 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 12371438138a4354918703122639ecf2, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1686077836279 2023-06-06 18:58:04,749 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 753e4f63c8fb4885a522996000aec5b1, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=232, earliestPutTs=1686077874652 2023-06-06 18:58:04,750 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting f6aa7c42166b41ff9c91875276dd0950, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1686077874679 2023-06-06 18:58:04,760 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#50 average throughput is 44.47 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:58:04,773 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/670e416c87e7421d8acd3e33913d9226 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/670e416c87e7421d8acd3e33913d9226 2023-06-06 18:58:04,778 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 670e416c87e7421d8acd3e33913d9226(size=142.5 K), total size for store is 142.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-06 18:58:04,778 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:04,778 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077884747; duration=0sec 2023-06-06 18:58:04,778 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:58:06,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:06,736 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:58:06,752 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=258 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e4264710bb22445cbfea8ed5307400ef 2023-06-06 18:58:06,757 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e4264710bb22445cbfea8ed5307400ef as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef 2023-06-06 18:58:06,765 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef, entries=7, sequenceid=258, filesize=12.1 K 2023-06-06 18:58:06,766 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for fc3cee2da8965b24feceaad789cf1296 in 30ms, sequenceid=258, compaction requested=false 2023-06-06 
18:58:06,766 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:06,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:06,766 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-06 18:58:06,776 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=284 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/689ea7fbfb7f48dab0c833effb640c09 2023-06-06 18:58:06,782 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/689ea7fbfb7f48dab0c833effb640c09 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09 2023-06-06 18:58:06,786 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09, entries=23, sequenceid=284, filesize=29.0 K 2023-06-06 18:58:06,787 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for fc3cee2da8965b24feceaad789cf1296 in 21ms, sequenceid=284, compaction requested=true 2023-06-06 18:58:06,787 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:06,787 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:58:06,788 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:58:06,789 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 188059 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:58:06,789 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:58:06,789 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:58:06,789 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/670e416c87e7421d8acd3e33913d9226, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=183.7 K 2023-06-06 18:58:06,789 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 670e416c87e7421d8acd3e33913d9226, keycount=130, bloomtype=ROW, size=142.5 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1686077836279 2023-06-06 18:58:06,790 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting e4264710bb22445cbfea8ed5307400ef, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=258, earliestPutTs=1686077884723 2023-06-06 18:58:06,790 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 689ea7fbfb7f48dab0c833effb640c09, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=284, earliestPutTs=1686077886737 2023-06-06 18:58:06,802 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#53 average throughput is 82.09 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:58:06,816 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e387dcdf341a4e20bf8f114cf6516e24 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e387dcdf341a4e20bf8f114cf6516e24 2023-06-06 18:58:06,821 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into e387dcdf341a4e20bf8f114cf6516e24(size=174.2 K), total size for store is 174.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:58:06,821 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:06,822 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077886787; duration=0sec 2023-06-06 18:58:06,822 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:58:08,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:08,781 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-06 18:58:08,792 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/5057955b211b4f7896f8203e5507a034 2023-06-06 18:58:08,798 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/5057955b211b4f7896f8203e5507a034 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034 2023-06-06 18:58:08,804 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034, entries=7, sequenceid=295, filesize=12.1 K 2023-06-06 18:58:08,805 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for fc3cee2da8965b24feceaad789cf1296 in 25ms, sequenceid=295, compaction requested=false 2023-06-06 18:58:08,805 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:08,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43527] regionserver.HRegion(9158): Flush requested on fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:08,806 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-06 18:58:08,816 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=317 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/2f43da31db904f338da9a3e7749dda3e 2023-06-06 18:58:08,822 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/2f43da31db904f338da9a3e7749dda3e as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e 2023-06-06 18:58:08,827 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e, entries=19, sequenceid=317, filesize=24.8 K 2023-06-06 18:58:08,827 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=9.46 KB/9684 for fc3cee2da8965b24feceaad789cf1296 in 21ms, sequenceid=317, compaction requested=true 2023-06-06 18:58:08,828 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:08,828 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-06 18:58:08,828 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-06 18:58:08,829 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 216223 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-06 18:58:08,829 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1912): fc3cee2da8965b24feceaad789cf1296/info is initiating minor compaction (all files) 2023-06-06 18:58:08,829 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fc3cee2da8965b24feceaad789cf1296/info in TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:58:08,829 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e387dcdf341a4e20bf8f114cf6516e24, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e] into tmpdir=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp, totalSize=211.2 K 2023-06-06 18:58:08,829 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting e387dcdf341a4e20bf8f114cf6516e24, keycount=160, bloomtype=ROW, size=174.2 K, encoding=NONE, compression=NONE, seqNum=284, earliestPutTs=1686077836279 2023-06-06 18:58:08,830 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 5057955b211b4f7896f8203e5507a034, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1686077886767 2023-06-06 18:58:08,830 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] compactions.Compactor(207): Compacting 2f43da31db904f338da9a3e7749dda3e, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=317, earliestPutTs=1686077888781 2023-06-06 18:58:08,842 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] throttle.PressureAwareThroughputController(145): fc3cee2da8965b24feceaad789cf1296#info#compaction#56 average throughput is 95.43 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-06 18:58:08,855 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/6a53b282ced8414ca8bbb92cf22e30e2 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6a53b282ced8414ca8bbb92cf22e30e2 2023-06-06 18:58:08,861 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fc3cee2da8965b24feceaad789cf1296/info of fc3cee2da8965b24feceaad789cf1296 into 6a53b282ced8414ca8bbb92cf22e30e2(size=201.8 K), total size for store is 201.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-06 18:58:08,861 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:08,861 INFO [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., storeName=fc3cee2da8965b24feceaad789cf1296/info, priority=13, startTime=1686077888828; duration=0sec 2023-06-06 18:58:08,861 DEBUG [RS:0;jenkins-hbase20:43527-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-06 18:58:10,818 INFO [Listener at localhost.localdomain/32863] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-06 18:58:11,029 INFO [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077823364 with entries=312, filesize=307.75 KB; new WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077890819 2023-06-06 18:58:11,030 DEBUG [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45475,DS-8a7df69f-e623-48df-a3b3-5647280cfbe3,DISK], DatanodeInfoWithStorage[127.0.0.1:32913,DS-8edfa759-99c5-4566-a73c-9e06560934ca,DISK]] 2023-06-06 18:58:11,030 DEBUG [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077823364 is not closed yet, will try archiving it next time 2023-06-06 18:58:11,037 INFO [Listener at localhost.localdomain/32863] regionserver.HRegion(2745): Flushing 2b29e6e13d59edb5c32c367408459dc7 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-06 18:58:11,045 INFO [Listener at localhost.localdomain/32863] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/.tmp/info/9304ed15c1404cfba30c57a467998adf 2023-06-06 18:58:11,051 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/.tmp/info/9304ed15c1404cfba30c57a467998adf as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/info/9304ed15c1404cfba30c57a467998adf 2023-06-06 18:58:11,056 INFO [Listener at localhost.localdomain/32863] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/info/9304ed15c1404cfba30c57a467998adf, entries=2, sequenceid=6, filesize=4.8 K 2023-06-06 18:58:11,057 INFO [Listener at localhost.localdomain/32863] regionserver.HRegion(2948): Finished flush of 
dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 2b29e6e13d59edb5c32c367408459dc7 in 21ms, sequenceid=6, compaction requested=false 2023-06-06 18:58:11,058 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegion(2446): Flush status journal for 2b29e6e13d59edb5c32c367408459dc7: 2023-06-06 18:58:11,058 INFO [Listener at localhost.localdomain/32863] regionserver.HRegion(2745): Flushing fc3cee2da8965b24feceaad789cf1296 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-06 18:58:11,070 INFO [Listener at localhost.localdomain/32863] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=330 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e64397e235674ed4a04cab133393a543 2023-06-06 18:58:11,075 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/.tmp/info/e64397e235674ed4a04cab133393a543 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e64397e235674ed4a04cab133393a543 2023-06-06 18:58:11,080 INFO [Listener at localhost.localdomain/32863] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e64397e235674ed4a04cab133393a543, entries=9, sequenceid=330, filesize=14.2 K 2023-06-06 18:58:11,081 INFO [Listener at localhost.localdomain/32863] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for fc3cee2da8965b24feceaad789cf1296 in 23ms, sequenceid=330, compaction requested=false 2023-06-06 18:58:11,081 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegion(2446): Flush status journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:11,081 INFO [Listener at localhost.localdomain/32863] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-06 18:58:11,092 INFO [Listener at localhost.localdomain/32863] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/info/77f0fb92dcca414f9bf4b6cf23e8d64d 2023-06-06 18:58:11,099 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/.tmp/info/77f0fb92dcca414f9bf4b6cf23e8d64d as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info/77f0fb92dcca414f9bf4b6cf23e8d64d 2023-06-06 18:58:11,105 INFO [Listener at localhost.localdomain/32863] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/info/77f0fb92dcca414f9bf4b6cf23e8d64d, entries=16, sequenceid=24, filesize=7.0 K 2023-06-06 18:58:11,106 INFO [Listener 
at localhost.localdomain/32863] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 25ms, sequenceid=24, compaction requested=false 2023-06-06 18:58:11,106 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-06 18:58:11,106 DEBUG [Listener at localhost.localdomain/32863] regionserver.HRegion(2446): Flush status journal for eb3ff360e401d921fe58a4b8c8476b44: 2023-06-06 18:58:11,116 INFO [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077890819 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077891106 2023-06-06 18:58:11,116 DEBUG [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32913,DS-8edfa759-99c5-4566-a73c-9e06560934ca,DISK], DatanodeInfoWithStorage[127.0.0.1:45475,DS-8a7df69f-e623-48df-a3b3-5647280cfbe3,DISK]] 2023-06-06 18:58:11,117 DEBUG [Listener at localhost.localdomain/32863] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077890819 is not closed yet, will try archiving it next time 2023-06-06 18:58:11,117 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077823364 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077823364 2023-06-06 18:58:11,118 INFO [Listener at localhost.localdomain/32863] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-06-06 18:58:11,120 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077890819 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs/jenkins-hbase20.apache.org%2C43527%2C1686077822982.1686077890819 2023-06-06 18:58:11,219 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:58:11,220 INFO [Listener at localhost.localdomain/32863] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-06 18:58:11,220 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x465a1f43 to 127.0.0.1:55735 2023-06-06 18:58:11,220 DEBUG [Listener at localhost.localdomain/32863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:11,220 DEBUG [Listener at localhost.localdomain/32863] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:58:11,220 DEBUG [Listener at localhost.localdomain/32863] util.JVMClusterUtil(257): Found active master 
hash=2052150187, stopped=false 2023-06-06 18:58:11,220 INFO [Listener at localhost.localdomain/32863] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:58:11,223 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:58:11,223 INFO [Listener at localhost.localdomain/32863] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:58:11,223 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:58:11,223 DEBUG [Listener at localhost.localdomain/32863] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c4daf0f to 127.0.0.1:55735 2023-06-06 18:58:11,225 DEBUG [Listener at localhost.localdomain/32863] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:11,223 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:11,225 INFO [Listener at localhost.localdomain/32863] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,43527,1686077822982' ***** 2023-06-06 18:58:11,225 INFO [Listener at localhost.localdomain/32863] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:58:11,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:11,226 INFO [RS:0;jenkins-hbase20:43527] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:58:11,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:11,226 INFO [RS:0;jenkins-hbase20:43527] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:58:11,226 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:58:11,227 INFO [RS:0;jenkins-hbase20:43527] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:58:11,227 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(3303): Received CLOSE for 2b29e6e13d59edb5c32c367408459dc7 2023-06-06 18:58:11,227 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(3303): Received CLOSE for fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:11,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2b29e6e13d59edb5c32c367408459dc7, disabling compactions & flushes 2023-06-06 18:58:11,227 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(3303): Received CLOSE for eb3ff360e401d921fe58a4b8c8476b44 2023-06-06 18:58:11,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 
2023-06-06 18:58:11,228 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:58:11,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:58:11,228 DEBUG [RS:0;jenkins-hbase20:43527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x12fe4ff4 to 127.0.0.1:55735 2023-06-06 18:58:11,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. after waiting 0 ms 2023-06-06 18:58:11,228 DEBUG [RS:0;jenkins-hbase20:43527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:11,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:58:11,229 INFO [RS:0;jenkins-hbase20:43527] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:58:11,229 INFO [RS:0;jenkins-hbase20:43527] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-06 18:58:11,229 INFO [RS:0;jenkins-hbase20:43527] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-06 18:58:11,229 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:58:11,230 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-06-06 18:58:11,231 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1478): Online Regions={2b29e6e13d59edb5c32c367408459dc7=hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7., fc3cee2da8965b24feceaad789cf1296=TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296., 1588230740=hbase:meta,,1.1588230740, eb3ff360e401d921fe58a4b8c8476b44=TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.} 2023-06-06 18:58:11,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:58:11,231 DEBUG [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1504): Waiting on 1588230740, 2b29e6e13d59edb5c32c367408459dc7, eb3ff360e401d921fe58a4b8c8476b44, fc3cee2da8965b24feceaad789cf1296 2023-06-06 18:58:11,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:58:11,232 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:58:11,232 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:58:11,233 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:58:11,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/namespace/2b29e6e13d59edb5c32c367408459dc7/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-06 18:58:11,240 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2b29e6e13d59edb5c32c367408459dc7: 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686077823581.2b29e6e13d59edb5c32c367408459dc7. 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fc3cee2da8965b24feceaad789cf1296, disabling compactions & flushes 2023-06-06 18:58:11,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. after waiting 0 ms 2023-06-06 18:58:11,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 
2023-06-06 18:58:11,243 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:58:11,245 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:58:11,247 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:58:11,248 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:58:11,250 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-06 18:58:11,258 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-top, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b5b07ec1fe6418e973ae5e7a0f49089, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/878dd8c046ee45a8ae1c4c697f9a65e1, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/c8c3f15267854a67b12cfbf76ba5aae3, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/1721cc233c7845249cdc063a73d452bd, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f, 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/12371438138a4354918703122639ecf2, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/670e416c87e7421d8acd3e33913d9226, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e387dcdf341a4e20bf8f114cf6516e24, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e] to archive 2023-06-06 18:58:11,259 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-06 18:58:11,260 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80 2023-06-06 18:58:11,262 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b5b07ec1fe6418e973ae5e7a0f49089 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b5b07ec1fe6418e973ae5e7a0f49089 2023-06-06 18:58:11,263 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/66f9b100ad1145dca31515d1da36076a 2023-06-06 18:58:11,264 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/878dd8c046ee45a8ae1c4c697f9a65e1 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/878dd8c046ee45a8ae1c4c697f9a65e1 2023-06-06 18:58:11,265 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/9308194ba60c4aaeb79e7964de401d72 2023-06-06 18:58:11,266 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/a29a87b55b544ed9bbee7ec52d7e9151 2023-06-06 18:58:11,267 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/c8c3f15267854a67b12cfbf76ba5aae3 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/c8c3f15267854a67b12cfbf76ba5aae3 2023-06-06 18:58:11,268 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/861217bb7a3849498608f7b6c920709e 2023-06-06 18:58:11,269 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e9d74a76c26f423ea4f750f96b9af738 2023-06-06 18:58:11,271 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/1721cc233c7845249cdc063a73d452bd to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/1721cc233c7845249cdc063a73d452bd 2023-06-06 18:58:11,272 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0b9bfd3dad984b5aafffb8a9a5f8537f 2023-06-06 
18:58:11,273 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/0a2cb37d95e047a9bac8c2e40b44d0af 2023-06-06 18:58:11,274 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/12371438138a4354918703122639ecf2 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/12371438138a4354918703122639ecf2 2023-06-06 18:58:11,275 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/6e416ae560b94728a794bcbf97174a53 2023-06-06 18:58:11,276 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/753e4f63c8fb4885a522996000aec5b1 2023-06-06 18:58:11,276 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/670e416c87e7421d8acd3e33913d9226 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/670e416c87e7421d8acd3e33913d9226 2023-06-06 18:58:11,277 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950 to 
hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/f6aa7c42166b41ff9c91875276dd0950 2023-06-06 18:58:11,278 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e4264710bb22445cbfea8ed5307400ef 2023-06-06 18:58:11,279 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e387dcdf341a4e20bf8f114cf6516e24 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/e387dcdf341a4e20bf8f114cf6516e24 2023-06-06 18:58:11,280 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/689ea7fbfb7f48dab0c833effb640c09 2023-06-06 18:58:11,280 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/5057955b211b4f7896f8203e5507a034 2023-06-06 18:58:11,281 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/info/2f43da31db904f338da9a3e7749dda3e 2023-06-06 18:58:11,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/fc3cee2da8965b24feceaad789cf1296/recovered.edits/333.seqid, newMaxSeqId=333, maxSeqId=85 2023-06-06 18:58:11,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:58:11,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fc3cee2da8965b24feceaad789cf1296: 2023-06-06 18:58:11,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1686077846431.fc3cee2da8965b24feceaad789cf1296. 2023-06-06 18:58:11,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing eb3ff360e401d921fe58a4b8c8476b44, disabling compactions & flushes 2023-06-06 18:58:11,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:58:11,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:58:11,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. after waiting 0 ms 2023-06-06 18:58:11,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:58:11,288 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80->hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/2bbba3429b25e6edc94320062d822f80/info/a75b5585e43741bda6d33d4f810b7452-bottom] to archive 2023-06-06 18:58:11,289 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-06 18:58:11,291 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80 to hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/archive/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/info/a75b5585e43741bda6d33d4f810b7452.2bbba3429b25e6edc94320062d822f80 2023-06-06 18:58:11,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/data/default/TestLogRolling-testLogRolling/eb3ff360e401d921fe58a4b8c8476b44/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=85 2023-06-06 18:58:11,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:58:11,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for eb3ff360e401d921fe58a4b8c8476b44: 2023-06-06 18:58:11,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1686077846431.eb3ff360e401d921fe58a4b8c8476b44. 2023-06-06 18:58:11,296 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-06 18:58:11,296 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-06 18:58:11,432 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43527,1686077822982; all regions closed. 
2023-06-06 18:58:11,433 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:58:11,443 DEBUG [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs 2023-06-06 18:58:11,443 INFO [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43527%2C1686077822982.meta:.meta(num 1686077823507) 2023-06-06 18:58:11,443 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/WALs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:58:11,452 DEBUG [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/oldWALs 2023-06-06 18:58:11,452 INFO [RS:0;jenkins-hbase20:43527] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43527%2C1686077822982:(num 1686077891106) 2023-06-06 18:58:11,452 DEBUG [RS:0;jenkins-hbase20:43527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:11,452 INFO [RS:0;jenkins-hbase20:43527] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:58:11,452 INFO [RS:0;jenkins-hbase20:43527] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-06 18:58:11,453 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:58:11,454 INFO [RS:0;jenkins-hbase20:43527] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43527 2023-06-06 18:58:11,458 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43527,1686077822982 2023-06-06 18:58:11,458 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:58:11,458 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:58:11,459 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43527,1686077822982] 2023-06-06 18:58:11,459 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43527,1686077822982; numProcessing=1 2023-06-06 18:58:11,460 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43527,1686077822982 already deleted, retry=false 2023-06-06 18:58:11,460 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43527,1686077822982 expired; onlineServers=0 2023-06-06 18:58:11,460 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region 
server 'jenkins-hbase20.apache.org,33223,1686077822941' ***** 2023-06-06 18:58:11,461 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:58:11,461 DEBUG [M:0;jenkins-hbase20:33223] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e6ed839, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:58:11,461 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:58:11,461 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33223,1686077822941; all regions closed. 2023-06-06 18:58:11,461 DEBUG [M:0;jenkins-hbase20:33223] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:11,462 DEBUG [M:0;jenkins-hbase20:33223] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:58:11,462 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-06 18:58:11,462 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077823146] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077823146,5,FailOnTimeoutGroup] 2023-06-06 18:58:11,462 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077823146] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077823146,5,FailOnTimeoutGroup] 2023-06-06 18:58:11,462 DEBUG [M:0;jenkins-hbase20:33223] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:58:11,463 INFO [M:0;jenkins-hbase20:33223] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:58:11,464 INFO [M:0;jenkins-hbase20:33223] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-06 18:58:11,464 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:58:11,464 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:11,464 INFO [M:0;jenkins-hbase20:33223] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:58:11,464 DEBUG [M:0;jenkins-hbase20:33223] master.HMaster(1512): Stopping service threads 2023-06-06 18:58:11,464 INFO [M:0;jenkins-hbase20:33223] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:58:11,465 ERROR [M:0;jenkins-hbase20:33223] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-06 18:58:11,465 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:58:11,465 INFO [M:0;jenkins-hbase20:33223] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:58:11,465 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-06 18:58:11,465 DEBUG [M:0;jenkins-hbase20:33223] zookeeper.ZKUtil(398): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:58:11,465 WARN [M:0;jenkins-hbase20:33223] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:58:11,465 INFO [M:0;jenkins-hbase20:33223] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:58:11,466 INFO [M:0;jenkins-hbase20:33223] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:58:11,466 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:58:11,466 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:11,466 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:11,466 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:58:11,466 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-06 18:58:11,466 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-06-06 18:58:11,475 INFO [M:0;jenkins-hbase20:33223] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/547ad244588245679927a5c7f6dfd274 2023-06-06 18:58:11,479 INFO [M:0;jenkins-hbase20:33223] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 547ad244588245679927a5c7f6dfd274 2023-06-06 18:58:11,480 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/547ad244588245679927a5c7f6dfd274 as hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/547ad244588245679927a5c7f6dfd274 2023-06-06 18:58:11,484 INFO [M:0;jenkins-hbase20:33223] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 547ad244588245679927a5c7f6dfd274 2023-06-06 18:58:11,484 INFO [M:0;jenkins-hbase20:33223] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33225/user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/547ad244588245679927a5c7f6dfd274, entries=18, sequenceid=160, filesize=6.9 K 2023-06-06 18:58:11,485 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 19ms, sequenceid=160, compaction requested=false 2023-06-06 18:58:11,487 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:11,487 DEBUG [M:0;jenkins-hbase20:33223] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:58:11,487 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f1ab03d4-851d-54c1-c47e-4fb22f9bd457/MasterData/WALs/jenkins-hbase20.apache.org,33223,1686077822941 2023-06-06 18:58:11,491 INFO [M:0;jenkins-hbase20:33223] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:58:11,491 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:58:11,491 INFO [M:0;jenkins-hbase20:33223] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33223 2023-06-06 18:58:11,493 DEBUG [M:0;jenkins-hbase20:33223] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33223,1686077822941 already deleted, retry=false 2023-06-06 18:58:11,559 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:11,559 INFO [RS:0;jenkins-hbase20:43527] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43527,1686077822982; zookeeper connection closed. 
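The ~64.78 KB figure above is simply the memstore payload in bytes (66,332 / 1024 ≈ 64.78 KB) being written to one HFile before the master's local store region closes, after which the master:store WAL roller exits. For ordinary user tables the same memstore-to-HFile flush can be requested explicitly through the Admin API; a hedged sketch (the table name is illustrative, and this RPC path is not literally what the master store uses on close):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Writes the memstore of every region of the table out as HFiles, conceptually the
      // same flush-and-commit sequence (.tmp file, then commit into the store) seen above.
      admin.flush(TableName.valueOf("TestTable")); // table name is illustrative
    }
  }
}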
2023-06-06 18:58:11,560 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): regionserver:43527-0x101c1c7eff50001, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:11,561 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@16dde81f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@16dde81f 2023-06-06 18:58:11,561 INFO [Listener at localhost.localdomain/32863] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-06 18:58:11,660 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:11,660 INFO [M:0;jenkins-hbase20:33223] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33223,1686077822941; zookeeper connection closed. 2023-06-06 18:58:11,660 DEBUG [Listener at localhost.localdomain/32863-EventThread] zookeeper.ZKWatcher(600): master:33223-0x101c1c7eff50000, quorum=127.0.0.1:55735, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:11,663 WARN [Listener at localhost.localdomain/32863] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:58:11,671 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:58:11,787 WARN [BP-76056277-148.251.75.209-1686077822440 heartbeating to localhost.localdomain/127.0.0.1:33225] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:58:11,787 WARN [BP-76056277-148.251.75.209-1686077822440 heartbeating to localhost.localdomain/127.0.0.1:33225] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-76056277-148.251.75.209-1686077822440 (Datanode Uuid 55668ead-aee2-42e9-b1ef-2230f227b211) service to localhost.localdomain/127.0.0.1:33225 2023-06-06 18:58:11,788 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/dfs/data/data3/current/BP-76056277-148.251.75.209-1686077822440] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:11,789 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/dfs/data/data4/current/BP-76056277-148.251.75.209-1686077822440] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:11,790 WARN [Listener at localhost.localdomain/32863] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:58:11,795 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:58:11,904 WARN [BP-76056277-148.251.75.209-1686077822440 heartbeating to localhost.localdomain/127.0.0.1:33225] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:58:11,904 WARN [BP-76056277-148.251.75.209-1686077822440 
heartbeating to localhost.localdomain/127.0.0.1:33225] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-76056277-148.251.75.209-1686077822440 (Datanode Uuid a4b89957-1c17-4585-9fb2-e530f992d50a) service to localhost.localdomain/127.0.0.1:33225 2023-06-06 18:58:11,906 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/dfs/data/data1/current/BP-76056277-148.251.75.209-1686077822440] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:11,907 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/cluster_22b25afb-8853-3f38-2fdd-fe14f7b08979/dfs/data/data2/current/BP-76056277-148.251.75.209-1686077822440] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:11,924 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:58:12,043 INFO [Listener at localhost.localdomain/32863] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:58:12,073 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:58:12,082 INFO [Listener at localhost.localdomain/32863] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 93) - Thread LEAK? -, OpenFileDescriptor=544 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=77 (was 110), ProcessCount=165 (was 166), AvailableMemoryMB=4903 (was 5532) 2023-06-06 18:58:12,090 INFO [Listener at localhost.localdomain/32863] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=544, MaxFileDescriptor=60000, SystemLoadAverage=77, ProcessCount=165, AvailableMemoryMB=4902 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/hadoop.log.dir so I do NOT create it in target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00181bf5-2696-27d6-9420-4935fde89394/hadoop.tmp.dir so I do NOT create it in target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd, deleteOnExit=true 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/test.cache.data in system properties and HBase conf 2023-06-06 18:58:12,091 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/hadoop.tmp.dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/hadoop.log.dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-06 18:58:12,092 DEBUG [Listener at localhost.localdomain/32863] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-06 18:58:12,092 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/nfs.dump.dir in 
system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/java.io.tmpdir in system properties and HBase conf 2023-06-06 18:58:12,093 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-06 18:58:12,094 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-06 18:58:12,094 INFO [Listener at localhost.localdomain/32863] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-06 18:58:12,095 WARN [Listener at localhost.localdomain/32863] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-06 18:58:12,097 WARN [Listener at localhost.localdomain/32863] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:58:12,097 WARN [Listener at localhost.localdomain/32863] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:58:12,121 WARN [Listener at localhost.localdomain/32863] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:58:12,123 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:58:12,128 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/java.io.tmpdir/Jetty_localhost_localdomain_37135_hdfs____j32zih/webapp 2023-06-06 18:58:12,198 INFO [Listener at localhost.localdomain/32863] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37135 2023-06-06 18:58:12,199 WARN [Listener at localhost.localdomain/32863] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
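The StartMiniClusterOption echoed a few entries earlier (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) corresponds to the builder that HBaseTestingUtility accepts; the DFS/Jetty/ZooKeeper startup logged here is what that call produces. A minimal sketch using the same shape (class name is illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // numMasters=1 in the logged option
        .numRegionServers(1)  // numRegionServers=1
        .numDataNodes(2)      // numDataNodes=2
        .numZkServers(1)      // numZkServers=1
        .build();
    util.startMiniCluster(option); // drives the MiniDFS, MiniZK and HBase startup logged here
    util.shutdownMiniCluster();
  }
}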
2023-06-06 18:58:12,200 WARN [Listener at localhost.localdomain/32863] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-06 18:58:12,200 WARN [Listener at localhost.localdomain/32863] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-06 18:58:12,225 WARN [Listener at localhost.localdomain/41893] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:58:12,239 WARN [Listener at localhost.localdomain/41893] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:58:12,242 WARN [Listener at localhost.localdomain/41893] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:58:12,242 INFO [Listener at localhost.localdomain/41893] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:58:12,247 INFO [Listener at localhost.localdomain/41893] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/java.io.tmpdir/Jetty_localhost_38179_datanode____.bzq2tn/webapp 2023-06-06 18:58:12,316 INFO [Listener at localhost.localdomain/41893] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38179 2023-06-06 18:58:12,322 WARN [Listener at localhost.localdomain/43651] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:58:12,333 WARN [Listener at localhost.localdomain/43651] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-06 18:58:12,335 WARN [Listener at localhost.localdomain/43651] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-06 18:58:12,336 INFO [Listener at localhost.localdomain/43651] log.Slf4jLog(67): jetty-6.1.26 2023-06-06 18:58:12,339 INFO [Listener at localhost.localdomain/43651] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/java.io.tmpdir/Jetty_localhost_38431_datanode____.6ztbpo/webapp 2023-06-06 18:58:12,376 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1d9382ca7f11cdfd: Processing first storage report for DS-a699bf78-4ac9-4790-9e88-146e2dec87b6 from datanode a29bcd60-23d6-4acc-a0d3-d541690f86bc 2023-06-06 18:58:12,376 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1d9382ca7f11cdfd: from storage DS-a699bf78-4ac9-4790-9e88-146e2dec87b6 node DatanodeRegistration(127.0.0.1:42895, datanodeUuid=a29bcd60-23d6-4acc-a0d3-d541690f86bc, infoPort=44603, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=67437708;c=1686077892098), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-06 18:58:12,376 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1d9382ca7f11cdfd: Processing first storage report for DS-50b7b5db-960e-4e54-9852-7bdd25e07ea2 
from datanode a29bcd60-23d6-4acc-a0d3-d541690f86bc 2023-06-06 18:58:12,376 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1d9382ca7f11cdfd: from storage DS-50b7b5db-960e-4e54-9852-7bdd25e07ea2 node DatanodeRegistration(127.0.0.1:42895, datanodeUuid=a29bcd60-23d6-4acc-a0d3-d541690f86bc, infoPort=44603, infoSecurePort=0, ipcPort=43651, storageInfo=lv=-57;cid=testClusterID;nsid=67437708;c=1686077892098), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:58:12,419 INFO [Listener at localhost.localdomain/43651] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38431 2023-06-06 18:58:12,425 WARN [Listener at localhost.localdomain/37875] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-06 18:58:12,488 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xceb0762888dafa84: Processing first storage report for DS-c63673dd-d850-4d92-a6c5-d5783beb9c26 from datanode 2fe246dc-f9aa-4720-8e91-4e97118ccd9b 2023-06-06 18:58:12,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xceb0762888dafa84: from storage DS-c63673dd-d850-4d92-a6c5-d5783beb9c26 node DatanodeRegistration(127.0.0.1:44849, datanodeUuid=2fe246dc-f9aa-4720-8e91-4e97118ccd9b, infoPort=37149, infoSecurePort=0, ipcPort=37875, storageInfo=lv=-57;cid=testClusterID;nsid=67437708;c=1686077892098), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:58:12,489 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xceb0762888dafa84: Processing first storage report for DS-58484f55-d753-4630-a7c0-0653599c1deb from datanode 2fe246dc-f9aa-4720-8e91-4e97118ccd9b 2023-06-06 18:58:12,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xceb0762888dafa84: from storage DS-58484f55-d753-4630-a7c0-0653599c1deb node DatanodeRegistration(127.0.0.1:44849, datanodeUuid=2fe246dc-f9aa-4720-8e91-4e97118ccd9b, infoPort=37149, infoSecurePort=0, ipcPort=37875, storageInfo=lv=-57;cid=testClusterID;nsid=67437708;c=1686077892098), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-06 18:58:12,534 DEBUG [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31 2023-06-06 18:58:12,537 INFO [Listener at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/zookeeper_0, clientPort=55318, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-06 18:58:12,538 INFO [Listener 
at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55318 2023-06-06 18:58:12,538 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,539 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,557 INFO [Listener at localhost.localdomain/37875] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef with version=8 2023-06-06 18:58:12,558 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34031/user/jenkins/test-data/9210301d-7d5d-dee4-1c11-d55ef3b914bf/hbase-staging 2023-06-06 18:58:12,560 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:58:12,560 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,561 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,561 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:58:12,561 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,561 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:58:12,561 INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:58:12,564 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:34111 2023-06-06 18:58:12,565 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,566 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,567 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34111 connecting to ZooKeeper ensemble=127.0.0.1:55318 2023-06-06 
18:58:12,572 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:341110x0, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:58:12,573 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34111-0x101c1c8ffe30000 connected 2023-06-06 18:58:12,583 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:58:12,583 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:12,584 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:58:12,584 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34111 2023-06-06 18:58:12,584 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34111 2023-06-06 18:58:12,584 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34111 2023-06-06 18:58:12,585 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34111 2023-06-06 18:58:12,585 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34111 2023-06-06 18:58:12,585 INFO [Listener at localhost.localdomain/37875] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef, hbase.cluster.distributed=false 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-06 18:58:12,599 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-06 18:58:12,600 
INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-06 18:58:12,601 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44481 2023-06-06 18:58:12,602 INFO [Listener at localhost.localdomain/37875] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-06 18:58:12,603 DEBUG [Listener at localhost.localdomain/37875] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-06 18:58:12,604 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,605 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,606 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44481 connecting to ZooKeeper ensemble=127.0.0.1:55318 2023-06-06 18:58:12,613 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:444810x0, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-06 18:58:12,615 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:444810x0, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:58:12,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44481-0x101c1c8ffe30001 connected 2023-06-06 18:58:12,616 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:12,616 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-06 18:58:12,617 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44481 2023-06-06 18:58:12,617 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44481 2023-06-06 18:58:12,617 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44481 2023-06-06 18:58:12,617 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44481 2023-06-06 18:58:12,617 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44481 2023-06-06 18:58:12,618 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,632 DEBUG [Listener at 
localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:58:12,632 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,633 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:58:12,633 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-06 18:58:12,633 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,634 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:58:12,635 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,34111,1686077892560 from backup master directory 2023-06-06 18:58:12,635 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-06 18:58:12,636 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,636 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-06 18:58:12,636 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
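The watcher traffic above is the active-master election: the master first registers under /hbase/backup-masters, then promotes itself by creating /hbase/master and deleting its backup entry (the earlier shutdown entry "znode data == null" was this same /hbase/master node after deletion). A hedged sketch of peeking at those znodes with a bare ZooKeeper client; the quorum port 55318 is the ephemeral one from this run and differs per test:

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ZNodePeekSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the mini ZK quorum; no watcher logic needed for a one-off read.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:55318", 30000, event -> { });
    try {
      List<String> backups = zk.getChildren("/hbase/backup-masters", false);
      byte[] active = zk.getData("/hbase/master", false, null); // serialized master address
      System.out.println("backup masters: " + backups);
      System.out.println("/hbase/master payload bytes: " + active.length);
    } finally {
      zk.close();
    }
  }
}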
2023-06-06 18:58:12,636 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,654 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/hbase.id with ID: 8c658b06-c01d-4c7a-9baf-f128a5cc3c84 2023-06-06 18:58:12,664 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:12,666 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,675 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1835cfad to 127.0.0.1:55318 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:58:12,683 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ba93ee1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:58:12,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-06 18:58:12,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-06 18:58:12,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:58:12,685 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store-tmp 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:58:12,693 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:12,693 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:12,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:58:12,694 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/WALs/jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34111%2C1686077892560, suffix=, logDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/WALs/jenkins-hbase20.apache.org,34111,1686077892560, archiveDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/oldWALs, maxLogs=10 2023-06-06 18:58:12,702 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/WALs/jenkins-hbase20.apache.org,34111,1686077892560/jenkins-hbase20.apache.org%2C34111%2C1686077892560.1686077892697 2023-06-06 18:58:12,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44849,DS-c63673dd-d850-4d92-a6c5-d5783beb9c26,DISK], DatanodeInfoWithStorage[127.0.0.1:42895,DS-a699bf78-4ac9-4790-9e88-146e2dec87b6,DISK]] 2023-06-06 18:58:12,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:58:12,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:12,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,704 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,705 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-06 18:58:12,706 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-06 18:58:12,706 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:12,707 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,707 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,710 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-06 18:58:12,711 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:58:12,712 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=700272, jitterRate=-0.10955856740474701}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:58:12,712 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:58:12,712 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-06 18:58:12,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-06 18:58:12,713 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-06 18:58:12,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-06 18:58:12,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-06 18:58:12,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-06 18:58:12,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-06 18:58:12,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-06 18:58:12,719 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-06 18:58:12,728 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-06 18:58:12,728 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
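The "Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000" entry reflects the StochasticLoadBalancer defaults picked up from the test configuration. A hedged sketch of overriding them before the minicluster starts; the property names below are the standard stochastic-balancer keys as I recall them and should be checked against the HBase version in use:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class BalancerConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Shrink the balancer's search budget; a single-regionserver test cluster has
    // little to balance, so a smaller budget keeps the chore cheap.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 100000);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 200);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 10000L);

    HBaseTestingUtility util = new HBaseTestingUtility(conf);
    util.startMiniCluster();
    util.shutdownMiniCluster();
  }
}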
2023-06-06 18:58:12,729 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-06 18:58:12,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-06 18:58:12,730 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-06 18:58:12,731 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-06 18:58:12,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-06 18:58:12,733 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-06 18:58:12,733 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:58:12,733 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-06 18:58:12,733 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,734 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,34111,1686077892560, sessionid=0x101c1c8ffe30000, setting cluster-up flag (Was=false) 2023-06-06 18:58:12,737 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,740 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-06 18:58:12,741 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,743 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,745 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-06 18:58:12,746 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:12,746 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.hbase-snapshot/.tmp 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-06 18:58:12,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,750 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:58:12,750 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686077922758 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-06 18:58:12,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,759 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:58:12,759 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-06 18:58:12,759 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-06 18:58:12,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-06 18:58:12,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-06 18:58:12,761 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:58:12,762 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-06 18:58:12,762 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-06 18:58:12,762 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077892762,5,FailOnTimeoutGroup] 2023-06-06 18:58:12,763 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077892763,5,FailOnTimeoutGroup] 2023-06-06 18:58:12,763 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-06 18:58:12,763 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-06 18:58:12,763 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,763 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,770 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:58:12,771 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-06 18:58:12,771 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef 2023-06-06 18:58:12,781 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:12,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:58:12,784 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/info 2023-06-06 18:58:12,784 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:58:12,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:12,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:58:12,786 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:58:12,786 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:58:12,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:12,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:58:12,788 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/table 2023-06-06 18:58:12,788 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:58:12,788 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:12,789 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740 2023-06-06 18:58:12,789 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740 2023-06-06 18:58:12,791 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:58:12,793 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:58:12,794 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:58:12,795 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=830974, jitterRate=0.056638672947883606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:58:12,795 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:58:12,795 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:58:12,795 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:58:12,795 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:58:12,795 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:58:12,795 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:58:12,796 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:58:12,796 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:58:12,796 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-06 18:58:12,796 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-06 18:58:12,797 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-06 18:58:12,798 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-06 18:58:12,799 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-06 18:58:12,820 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(951): ClusterId : 8c658b06-c01d-4c7a-9baf-f128a5cc3c84 2023-06-06 18:58:12,821 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-06 18:58:12,823 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-06 18:58:12,823 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-06 18:58:12,825 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-06 18:58:12,826 DEBUG [RS:0;jenkins-hbase20:44481] zookeeper.ReadOnlyZKClient(139): Connect 0x00f35e61 to 127.0.0.1:55318 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:58:12,838 DEBUG [RS:0;jenkins-hbase20:44481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e34a00f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:58:12,838 DEBUG [RS:0;jenkins-hbase20:44481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a0b3511, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:58:12,845 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44481 2023-06-06 18:58:12,845 INFO [RS:0;jenkins-hbase20:44481] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-06 18:58:12,845 INFO [RS:0;jenkins-hbase20:44481] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-06 18:58:12,845 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-06 18:58:12,846 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,34111,1686077892560 with isa=jenkins-hbase20.apache.org/148.251.75.209:44481, startcode=1686077892598 2023-06-06 18:58:12,846 DEBUG [RS:0;jenkins-hbase20:44481] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-06 18:58:12,850 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-06 18:58:12,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34111] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,851 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef 2023-06-06 18:58:12,851 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41893 2023-06-06 18:58:12,851 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-06 18:58:12,853 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:58:12,853 DEBUG [RS:0;jenkins-hbase20:44481] zookeeper.ZKUtil(162): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,853 WARN [RS:0;jenkins-hbase20:44481] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-06 18:58:12,853 INFO [RS:0;jenkins-hbase20:44481] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:58:12,854 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,854 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44481,1686077892598] 2023-06-06 18:58:12,858 DEBUG [RS:0;jenkins-hbase20:44481] zookeeper.ZKUtil(162): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,859 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-06 18:58:12,859 INFO [RS:0;jenkins-hbase20:44481] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-06 18:58:12,860 INFO [RS:0;jenkins-hbase20:44481] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-06 18:58:12,861 INFO [RS:0;jenkins-hbase20:44481] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-06 18:58:12,861 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,861 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-06 18:58:12,862 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,863 DEBUG [RS:0;jenkins-hbase20:44481] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-06 18:58:12,867 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,868 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,868 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:12,876 INFO [RS:0;jenkins-hbase20:44481] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-06 18:58:12,877 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44481,1686077892598-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-06 18:58:12,886 INFO [RS:0;jenkins-hbase20:44481] regionserver.Replication(203): jenkins-hbase20.apache.org,44481,1686077892598 started 2023-06-06 18:58:12,886 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44481,1686077892598, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44481, sessionid=0x101c1c8ffe30001 2023-06-06 18:58:12,886 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-06 18:58:12,886 DEBUG [RS:0;jenkins-hbase20:44481] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,886 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44481,1686077892598' 2023-06-06 18:58:12,886 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-06 18:58:12,886 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44481,1686077892598' 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-06 18:58:12,887 DEBUG [RS:0;jenkins-hbase20:44481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-06 18:58:12,888 DEBUG [RS:0;jenkins-hbase20:44481] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-06 18:58:12,888 INFO [RS:0;jenkins-hbase20:44481] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-06 18:58:12,888 INFO [RS:0;jenkins-hbase20:44481] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-06 18:58:12,949 DEBUG [jenkins-hbase20:34111] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-06 18:58:12,950 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44481,1686077892598, state=OPENING 2023-06-06 18:58:12,951 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-06 18:58:12,952 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:12,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44481,1686077892598}] 2023-06-06 18:58:12,952 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:58:12,991 INFO [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44481%2C1686077892598, suffix=, logDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598, archiveDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs, maxLogs=32 2023-06-06 18:58:13,001 INFO [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598/jenkins-hbase20.apache.org%2C44481%2C1686077892598.1686077892991 2023-06-06 18:58:13,002 DEBUG [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42895,DS-a699bf78-4ac9-4790-9e88-146e2dec87b6,DISK], DatanodeInfoWithStorage[127.0.0.1:44849,DS-c63673dd-d850-4d92-a6c5-d5783beb9c26,DISK]] 2023-06-06 18:58:13,107 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,107 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-06 18:58:13,112 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-06 18:58:13,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-06 18:58:13,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:58:13,121 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44481%2C1686077892598.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598, archiveDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs, maxLogs=32 2023-06-06 18:58:13,128 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598/jenkins-hbase20.apache.org%2C44481%2C1686077892598.meta.1686077893121.meta 2023-06-06 18:58:13,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44849,DS-c63673dd-d850-4d92-a6c5-d5783beb9c26,DISK], DatanodeInfoWithStorage[127.0.0.1:42895,DS-a699bf78-4ac9-4790-9e88-146e2dec87b6,DISK]] 2023-06-06 18:58:13,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:58:13,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-06 18:58:13,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-06 18:58:13,128 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-06 18:58:13,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-06 18:58:13,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:13,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-06 18:58:13,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-06 18:58:13,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-06 18:58:13,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/info 2023-06-06 18:58:13,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/info 2023-06-06 18:58:13,131 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-06 18:58:13,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:13,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-06 18:58:13,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:58:13,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/rep_barrier 2023-06-06 18:58:13,133 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-06 18:58:13,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:13,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-06 18:58:13,135 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/table 2023-06-06 18:58:13,135 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/table 2023-06-06 18:58:13,135 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-06 18:58:13,136 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:13,137 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740 2023-06-06 18:58:13,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740 2023-06-06 18:58:13,139 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-06 18:58:13,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-06 18:58:13,142 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=797431, jitterRate=0.013986095786094666}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-06 18:58:13,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-06 18:58:13,146 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686077893107 2023-06-06 18:58:13,150 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-06 18:58:13,151 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-06 18:58:13,152 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44481,1686077892598, state=OPEN 2023-06-06 18:58:13,154 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-06 18:58:13,154 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-06 18:58:13,156 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-06 18:58:13,156 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44481,1686077892598 in 202 msec 2023-06-06 
18:58:13,158 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-06 18:58:13,159 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 359 msec 2023-06-06 18:58:13,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 412 msec 2023-06-06 18:58:13,161 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686077893161, completionTime=-1 2023-06-06 18:58:13,161 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-06 18:58:13,161 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-06 18:58:13,164 DEBUG [hconnection-0x13324fa7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:58:13,167 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47730, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:58:13,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-06 18:58:13,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686077953169 2023-06-06 18:58:13,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686078013169 2023-06-06 18:58:13,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-06-06 18:58:13,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34111,1686077892560-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:13,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34111,1686077892560-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:13,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34111,1686077892560-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:13,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:34111, period=300000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:13,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-06 18:58:13,178 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-06 18:58:13,178 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-06 18:58:13,179 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-06 18:58:13,179 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-06 18:58:13,181 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-06 18:58:13,182 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-06 18:58:13,184 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.tmp/data/hbase/namespace/20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,184 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.tmp/data/hbase/namespace/20943744edffa484120f311b1d15800e empty. 2023-06-06 18:58:13,185 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.tmp/data/hbase/namespace/20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,185 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-06 18:58:13,195 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-06 18:58:13,196 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 20943744edffa484120f311b1d15800e, NAME => 'hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/.tmp 2023-06-06 18:58:13,204 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:13,204 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 20943744edffa484120f311b1d15800e, disabling compactions & flushes 2023-06-06 18:58:13,204 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,205 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,205 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. after waiting 0 ms 2023-06-06 18:58:13,205 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,205 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,205 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 20943744edffa484120f311b1d15800e: 2023-06-06 18:58:13,208 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-06 18:58:13,209 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077893209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686077893209"}]},"ts":"1686077893209"} 2023-06-06 18:58:13,212 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-06 18:58:13,213 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-06 18:58:13,213 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077893213"}]},"ts":"1686077893213"} 2023-06-06 18:58:13,214 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-06 18:58:13,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=20943744edffa484120f311b1d15800e, ASSIGN}] 2023-06-06 18:58:13,222 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=20943744edffa484120f311b1d15800e, ASSIGN 2023-06-06 18:58:13,223 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=20943744edffa484120f311b1d15800e, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44481,1686077892598; forceNewPlan=false, retain=false 2023-06-06 18:58:13,375 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=20943744edffa484120f311b1d15800e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,376 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077893375"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686077893375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686077893375"}]},"ts":"1686077893375"} 2023-06-06 18:58:13,385 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 20943744edffa484120f311b1d15800e, server=jenkins-hbase20.apache.org,44481,1686077892598}] 2023-06-06 18:58:13,548 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,548 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 20943744edffa484120f311b1d15800e, NAME => 'hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.', STARTKEY => '', ENDKEY => ''} 2023-06-06 18:58:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-06 18:58:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,550 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,552 INFO [StoreOpener-20943744edffa484120f311b1d15800e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,553 DEBUG [StoreOpener-20943744edffa484120f311b1d15800e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/info 2023-06-06 18:58:13,553 DEBUG [StoreOpener-20943744edffa484120f311b1d15800e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/info 2023-06-06 18:58:13,554 INFO [StoreOpener-20943744edffa484120f311b1d15800e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 20943744edffa484120f311b1d15800e columnFamilyName info 2023-06-06 18:58:13,555 INFO [StoreOpener-20943744edffa484120f311b1d15800e-1] regionserver.HStore(310): Store=20943744edffa484120f311b1d15800e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-06 18:58:13,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,563 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-06 18:58:13,564 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 20943744edffa484120f311b1d15800e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=785601, jitterRate=-0.0010567307472229004}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-06 18:58:13,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 20943744edffa484120f311b1d15800e: 2023-06-06 18:58:13,566 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e., pid=6, masterSystemTime=1686077893540 2023-06-06 18:58:13,568 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,568 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 
2023-06-06 18:58:13,569 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=20943744edffa484120f311b1d15800e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,569 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686077893569"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686077893569"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686077893569"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686077893569"}]},"ts":"1686077893569"} 2023-06-06 18:58:13,573 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-06 18:58:13,573 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 20943744edffa484120f311b1d15800e, server=jenkins-hbase20.apache.org,44481,1686077892598 in 186 msec 2023-06-06 18:58:13,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-06 18:58:13,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=20943744edffa484120f311b1d15800e, ASSIGN in 355 msec 2023-06-06 18:58:13,575 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-06 18:58:13,575 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686077893575"}]},"ts":"1686077893575"} 2023-06-06 18:58:13,577 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-06 18:58:13,578 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-06 18:58:13,580 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 400 msec 2023-06-06 18:58:13,582 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-06 18:58:13,583 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:58:13,583 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:13,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-06 18:58:13,598 DEBUG [Listener at localhost.localdomain/37875-EventThread] 
zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:58:13,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-06-06 18:58:13,609 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-06 18:58:13,618 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-06 18:58:13,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-06-06 18:58:13,637 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-06 18:58:13,640 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-06 18:58:13,640 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.003sec 2023-06-06 18:58:13,640 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-06 18:58:13,640 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-06 18:58:13,640 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-06 18:58:13,640 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34111,1686077892560-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-06 18:58:13,641 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34111,1686077892560-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-06 18:58:13,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-06 18:58:13,722 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(139): Connect 0x75d967be to 127.0.0.1:55318 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-06 18:58:13,730 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7729b970, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-06 18:58:13,733 DEBUG [hconnection-0x12591073-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-06 18:58:13,735 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47732, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-06 18:58:13,736 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:13,736 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-06 18:58:13,741 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-06 18:58:13,741 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:13,741 INFO [Listener at localhost.localdomain/37875] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-06 18:58:13,741 INFO [Listener at localhost.localdomain/37875] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-06 18:58:13,743 INFO [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs, maxLogs=32 2023-06-06 18:58:13,748 INFO [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1/test.com%2C8080%2C1.1686077893744 2023-06-06 18:58:13,748 DEBUG [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42895,DS-a699bf78-4ac9-4790-9e88-146e2dec87b6,DISK], DatanodeInfoWithStorage[127.0.0.1:44849,DS-c63673dd-d850-4d92-a6c5-d5783beb9c26,DISK]] 2023-06-06 18:58:13,753 INFO [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1/test.com%2C8080%2C1.1686077893744 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1/test.com%2C8080%2C1.1686077893748 2023-06-06 18:58:13,753 DEBUG [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44849,DS-c63673dd-d850-4d92-a6c5-d5783beb9c26,DISK], DatanodeInfoWithStorage[127.0.0.1:42895,DS-a699bf78-4ac9-4790-9e88-146e2dec87b6,DISK]] 2023-06-06 18:58:13,754 DEBUG [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1/test.com%2C8080%2C1.1686077893744 is not closed yet, will try archiving it next time 2023-06-06 18:58:13,754 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1 2023-06-06 18:58:13,762 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/test.com,8080,1/test.com%2C8080%2C1.1686077893744 to hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs/test.com%2C8080%2C1.1686077893744 2023-06-06 18:58:13,765 DEBUG [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs 2023-06-06 18:58:13,765 INFO [Listener at localhost.localdomain/37875] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1686077893748) 2023-06-06 18:58:13,765 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-06 18:58:13,765 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75d967be to 127.0.0.1:55318 2023-06-06 18:58:13,765 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:13,767 DEBUG [Listener at localhost.localdomain/37875] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-06 18:58:13,767 DEBUG [Listener at localhost.localdomain/37875] util.JVMClusterUtil(257): Found active master hash=1383999605, stopped=false 2023-06-06 18:58:13,767 INFO [Listener at localhost.localdomain/37875] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:13,768 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-06 18:58:13,768 INFO [Listener at localhost.localdomain/37875] procedure2.ProcedureExecutor(629): Stopping 2023-06-06 18:58:13,769 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:13,768 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 
2023-06-06 18:58:13,770 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1835cfad to 127.0.0.1:55318 2023-06-06 18:58:13,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:13,770 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:13,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-06 18:58:13,771 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44481,1686077892598' ***** 2023-06-06 18:58:13,771 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-06 18:58:13,771 INFO [RS:0;jenkins-hbase20:44481] regionserver.HeapMemoryManager(220): Stopping 2023-06-06 18:58:13,771 INFO [RS:0;jenkins-hbase20:44481] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-06 18:58:13,771 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-06 18:58:13,771 INFO [RS:0;jenkins-hbase20:44481] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-06 18:58:13,772 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(3303): Received CLOSE for 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,772 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,772 DEBUG [RS:0;jenkins-hbase20:44481] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00f35e61 to 127.0.0.1:55318 2023-06-06 18:58:13,772 DEBUG [RS:0;jenkins-hbase20:44481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:13,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 20943744edffa484120f311b1d15800e, disabling compactions & flushes 2023-06-06 18:58:13,773 INFO [RS:0;jenkins-hbase20:44481] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-06 18:58:13,773 INFO [RS:0;jenkins-hbase20:44481] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-06 18:58:13,773 INFO [RS:0;jenkins-hbase20:44481] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-06 18:58:13,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,773 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-06 18:58:13,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 
after waiting 0 ms 2023-06-06 18:58:13,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,773 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-06 18:58:13,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 20943744edffa484120f311b1d15800e 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-06 18:58:13,773 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 20943744edffa484120f311b1d15800e=hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e.} 2023-06-06 18:58:13,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-06 18:58:13,774 DEBUG [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1504): Waiting on 1588230740, 20943744edffa484120f311b1d15800e 2023-06-06 18:58:13,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-06 18:58:13,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-06 18:58:13,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-06 18:58:13,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-06 18:58:13,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-06-06 18:58:13,786 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/.tmp/info/317f472c156a4c10976afb18c9b7c3d4 2023-06-06 18:58:13,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/.tmp/info/0f7a06428a7a49f3a9544ff849e7e165 2023-06-06 18:58:13,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/.tmp/info/0f7a06428a7a49f3a9544ff849e7e165 as hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/info/0f7a06428a7a49f3a9544ff849e7e165 2023-06-06 18:58:13,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/info/0f7a06428a7a49f3a9544ff849e7e165, entries=2, sequenceid=6, filesize=4.8 K 2023-06-06 18:58:13,805 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 20943744edffa484120f311b1d15800e in 32ms, sequenceid=6, compaction requested=false 2023-06-06 18:58:13,809 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/.tmp/table/8fe1707f567445f2942870fc64c5dd6e 2023-06-06 18:58:13,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/namespace/20943744edffa484120f311b1d15800e/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-06 18:58:13,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 20943744edffa484120f311b1d15800e: 2023-06-06 18:58:13,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686077893178.20943744edffa484120f311b1d15800e. 2023-06-06 18:58:13,815 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/.tmp/info/317f472c156a4c10976afb18c9b7c3d4 as hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/info/317f472c156a4c10976afb18c9b7c3d4 2023-06-06 18:58:13,819 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/info/317f472c156a4c10976afb18c9b7c3d4, entries=10, sequenceid=9, filesize=5.9 K 2023-06-06 18:58:13,820 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/.tmp/table/8fe1707f567445f2942870fc64c5dd6e as hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/table/8fe1707f567445f2942870fc64c5dd6e 2023-06-06 18:58:13,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/table/8fe1707f567445f2942870fc64c5dd6e, entries=2, sequenceid=9, filesize=4.7 K 2023-06-06 18:58:13,827 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 53ms, sequenceid=9, compaction requested=false 2023-06-06 18:58:13,840 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-06-06 18:58:13,841 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-06 18:58:13,841 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-06 18:58:13,841 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-06 18:58:13,841 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-06 18:58:13,868 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-06 18:58:13,868 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-06 18:58:13,974 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44481,1686077892598; all regions closed. 2023-06-06 18:58:13,974 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,979 DEBUG [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs 2023-06-06 18:58:13,979 INFO [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C44481%2C1686077892598.meta:.meta(num 1686077893121) 2023-06-06 18:58:13,979 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/WALs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,984 DEBUG [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/oldWALs 2023-06-06 18:58:13,984 INFO [RS:0;jenkins-hbase20:44481] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C44481%2C1686077892598:(num 1686077892991) 2023-06-06 18:58:13,984 DEBUG [RS:0;jenkins-hbase20:44481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:13,984 INFO [RS:0;jenkins-hbase20:44481] regionserver.LeaseManager(133): Closed leases 2023-06-06 18:58:13,984 INFO [RS:0;jenkins-hbase20:44481] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-06 18:58:13,984 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-06 18:58:13,985 INFO [RS:0;jenkins-hbase20:44481] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44481 2023-06-06 18:58:13,988 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44481,1686077892598 2023-06-06 18:58:13,988 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:58:13,988 ERROR [Listener at localhost.localdomain/37875-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@7bf2a96a rejected from java.util.concurrent.ThreadPoolExecutor@7fcd22cc[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-06-06 18:58:13,988 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-06 18:58:13,989 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44481,1686077892598] 2023-06-06 18:58:13,989 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44481,1686077892598; numProcessing=1 2023-06-06 18:58:13,989 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44481,1686077892598 already deleted, retry=false 2023-06-06 18:58:13,989 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44481,1686077892598 expired; onlineServers=0 2023-06-06 18:58:13,989 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,34111,1686077892560' ***** 2023-06-06 18:58:13,990 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-06 18:58:13,990 DEBUG [M:0;jenkins-hbase20:34111] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5641a6e2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-06 18:58:13,990 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegionServer(1144): stopping server 
jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:13,990 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,34111,1686077892560; all regions closed. 2023-06-06 18:58:13,990 DEBUG [M:0;jenkins-hbase20:34111] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-06 18:58:13,990 DEBUG [M:0;jenkins-hbase20:34111] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-06 18:58:13,991 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-06 18:58:13,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077892763] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686077892763,5,FailOnTimeoutGroup] 2023-06-06 18:58:13,991 DEBUG [M:0;jenkins-hbase20:34111] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-06 18:58:13,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077892762] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686077892762,5,FailOnTimeoutGroup] 2023-06-06 18:58:13,992 INFO [M:0;jenkins-hbase20:34111] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-06 18:58:13,992 INFO [M:0;jenkins-hbase20:34111] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-06 18:58:13,992 INFO [M:0;jenkins-hbase20:34111] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-06 18:58:13,992 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-06 18:58:13,992 DEBUG [M:0;jenkins-hbase20:34111] master.HMaster(1512): Stopping service threads 2023-06-06 18:58:13,992 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-06 18:58:13,993 INFO [M:0;jenkins-hbase20:34111] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-06 18:58:13,993 ERROR [M:0;jenkins-hbase20:34111] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-06-06 18:58:13,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-06 18:58:13,993 INFO [M:0;jenkins-hbase20:34111] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-06 18:58:13,993 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-06 18:58:13,994 DEBUG [M:0;jenkins-hbase20:34111] zookeeper.ZKUtil(398): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-06 18:58:13,994 WARN [M:0;jenkins-hbase20:34111] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-06 18:58:13,994 INFO [M:0;jenkins-hbase20:34111] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-06 18:58:13,994 INFO [M:0;jenkins-hbase20:34111] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-06 18:58:13,995 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-06 18:58:13,995 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:13,995 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:13,995 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-06 18:58:13,995 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-06 18:58:13,995 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-06-06 18:58:14,008 INFO [M:0;jenkins-hbase20:34111] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/412cc90fd7874a9aa1ad045356a63241 2023-06-06 18:58:14,012 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/412cc90fd7874a9aa1ad045356a63241 as hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/412cc90fd7874a9aa1ad045356a63241 2023-06-06 18:58:14,016 INFO [M:0;jenkins-hbase20:34111] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41893/user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/412cc90fd7874a9aa1ad045356a63241, entries=8, sequenceid=66, filesize=6.3 K 2023-06-06 18:58:14,017 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=66, compaction requested=false 2023-06-06 18:58:14,018 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-06 18:58:14,018 DEBUG [M:0;jenkins-hbase20:34111] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-06 18:58:14,019 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f718b28d-dcd7-0972-3b44-9e4fedd4f1ef/MasterData/WALs/jenkins-hbase20.apache.org,34111,1686077892560 2023-06-06 18:58:14,021 INFO [M:0;jenkins-hbase20:34111] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-06 18:58:14,021 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-06 18:58:14,022 INFO [M:0;jenkins-hbase20:34111] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:34111 2023-06-06 18:58:14,024 DEBUG [M:0;jenkins-hbase20:34111] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,34111,1686077892560 already deleted, retry=false 2023-06-06 18:58:14,172 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:14,172 INFO [M:0;jenkins-hbase20:34111] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,34111,1686077892560; zookeeper connection closed. 2023-06-06 18:58:14,172 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:34111-0x101c1c8ffe30000, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:14,272 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:14,272 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x101c1c8ffe30001, quorum=127.0.0.1:55318, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-06 18:58:14,272 INFO [RS:0;jenkins-hbase20:44481] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44481,1686077892598; zookeeper connection closed. 
2023-06-06 18:58:14,273 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17cd24e3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17cd24e3 2023-06-06 18:58:14,273 INFO [Listener at localhost.localdomain/37875] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-06 18:58:14,274 WARN [Listener at localhost.localdomain/37875] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:58:14,280 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:58:14,394 WARN [BP-1613651888-148.251.75.209-1686077892098 heartbeating to localhost.localdomain/127.0.0.1:41893] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:58:14,395 WARN [BP-1613651888-148.251.75.209-1686077892098 heartbeating to localhost.localdomain/127.0.0.1:41893] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1613651888-148.251.75.209-1686077892098 (Datanode Uuid 2fe246dc-f9aa-4720-8e91-4e97118ccd9b) service to localhost.localdomain/127.0.0.1:41893 2023-06-06 18:58:14,397 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/dfs/data/data3/current/BP-1613651888-148.251.75.209-1686077892098] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:14,398 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/dfs/data/data4/current/BP-1613651888-148.251.75.209-1686077892098] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:14,399 WARN [Listener at localhost.localdomain/37875] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-06 18:58:14,401 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-06 18:58:14,505 WARN [BP-1613651888-148.251.75.209-1686077892098 heartbeating to localhost.localdomain/127.0.0.1:41893] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-06 18:58:14,506 WARN [BP-1613651888-148.251.75.209-1686077892098 heartbeating to localhost.localdomain/127.0.0.1:41893] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1613651888-148.251.75.209-1686077892098 (Datanode Uuid a29bcd60-23d6-4acc-a0d3-d541690f86bc) service to localhost.localdomain/127.0.0.1:41893 2023-06-06 18:58:14,507 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/dfs/data/data1/current/BP-1613651888-148.251.75.209-1686077892098] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:14,508 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5f1a0fe1-2b0b-a051-7c3b-a9dd740dfa31/cluster_4ae871bb-e9cf-5d10-79e9-8eb0b977ccfd/dfs/data/data2/current/BP-1613651888-148.251.75.209-1686077892098] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-06 18:58:14,522 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-06 18:58:14,638 INFO [Listener at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-06 18:58:14,650 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-06 18:58:14,660 INFO [Listener at localhost.localdomain/37875] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 105) - Thread LEAK? -, OpenFileDescriptor=567 (was 544) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=87 (was 77) - SystemLoadAverage LEAK? -, ProcessCount=165 (was 165), AvailableMemoryMB=4896 (was 4902)