2023-05-31 13:52:40,247 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651 2023-05-31 13:52:40,260 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-31 13:52:40,292 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=332, ProcessCount=170, AvailableMemoryMB=8962 2023-05-31 13:52:40,298 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 13:52:40,298 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c, deleteOnExit=true 2023-05-31 13:52:40,298 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 13:52:40,299 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/test.cache.data in system properties and HBase conf 2023-05-31 13:52:40,300 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 13:52:40,300 INFO [Time-limited test] 
hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/hadoop.log.dir in system properties and HBase conf 2023-05-31 13:52:40,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 13:52:40,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 13:52:40,302 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 13:52:40,407 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-31 13:52:40,771 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-31 13:52:40,776 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:52:40,777 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:52:40,777 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 13:52:40,778 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:52:40,778 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 13:52:40,779 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 13:52:40,779 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:52:40,779 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:52:40,780 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 13:52:40,780 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/nfs.dump.dir in system properties and HBase conf 2023-05-31 13:52:40,781 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/java.io.tmpdir in system properties and HBase conf 2023-05-31 13:52:40,781 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:52:40,781 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 13:52:40,782 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 13:52:41,298 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:52:41,310 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:52:41,315 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:52:41,568 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-31 13:52:41,734 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-31 13:52:41,751 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:52:41,786 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:52:41,815 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/java.io.tmpdir/Jetty_localhost_localdomain_36433_hdfs____.cf4ded/webapp 2023-05-31 13:52:42,000 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36433 2023-05-31 13:52:42,007 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:52:42,009 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:52:42,009 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:52:42,409 WARN [Listener at localhost.localdomain/38351] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:52:42,462 WARN [Listener at localhost.localdomain/38351] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:52:42,477 WARN [Listener at localhost.localdomain/38351] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:52:42,482 INFO [Listener at localhost.localdomain/38351] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:52:42,486 INFO [Listener at localhost.localdomain/38351] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/java.io.tmpdir/Jetty_localhost_33401_datanode____.aq55ms/webapp 2023-05-31 13:52:42,568 INFO [Listener at localhost.localdomain/38351] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33401 2023-05-31 13:52:42,824 WARN [Listener at localhost.localdomain/40045] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:52:42,831 WARN [Listener at localhost.localdomain/40045] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:52:42,834 WARN [Listener at localhost.localdomain/40045] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 
2023-05-31 13:52:42,836 INFO [Listener at localhost.localdomain/40045] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:52:42,840 INFO [Listener at localhost.localdomain/40045] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/java.io.tmpdir/Jetty_localhost_44995_datanode____x7pblo/webapp 2023-05-31 13:52:42,916 INFO [Listener at localhost.localdomain/40045] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44995 2023-05-31 13:52:42,923 WARN [Listener at localhost.localdomain/42735] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:52:43,148 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x79a003b612fac507: Processing first storage report for DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f from datanode 59a40338-8ac8-4d82-a585-66e88d5e6205 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x79a003b612fac507: from storage DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f node DatanodeRegistration(127.0.0.1:42031, datanodeUuid=59a40338-8ac8-4d82-a585-66e88d5e6205, infoPort=43029, infoSecurePort=0, ipcPort=40045, storageInfo=lv=-57;cid=testClusterID;nsid=799322755;c=1685541161380), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59795fdbe588963a: Processing first storage report for DS-e550d69e-32e4-4963-9b5d-474463fe034b from datanode a9065701-352f-4810-b096-25b6f9d0ea2d 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x59795fdbe588963a: from storage DS-e550d69e-32e4-4963-9b5d-474463fe034b node DatanodeRegistration(127.0.0.1:38643, datanodeUuid=a9065701-352f-4810-b096-25b6f9d0ea2d, infoPort=37319, infoSecurePort=0, ipcPort=42735, storageInfo=lv=-57;cid=testClusterID;nsid=799322755;c=1685541161380), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x79a003b612fac507: Processing first storage report for DS-3074dcca-c570-4f3a-ac0d-1b8b750b7b70 from datanode 59a40338-8ac8-4d82-a585-66e88d5e6205 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x79a003b612fac507: from storage DS-3074dcca-c570-4f3a-ac0d-1b8b750b7b70 node DatanodeRegistration(127.0.0.1:42031, datanodeUuid=59a40338-8ac8-4d82-a585-66e88d5e6205, infoPort=43029, infoSecurePort=0, ipcPort=40045, storageInfo=lv=-57;cid=testClusterID;nsid=799322755;c=1685541161380), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59795fdbe588963a: Processing first storage report for DS-2c10f172-0bbb-4f8a-9501-705d265d2499 from datanode a9065701-352f-4810-b096-25b6f9d0ea2d 2023-05-31 13:52:43,150 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59795fdbe588963a: from storage DS-2c10f172-0bbb-4f8a-9501-705d265d2499 node DatanodeRegistration(127.0.0.1:38643, datanodeUuid=a9065701-352f-4810-b096-25b6f9d0ea2d, infoPort=37319, infoSecurePort=0, ipcPort=42735, storageInfo=lv=-57;cid=testClusterID;nsid=799322755;c=1685541161380), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:52:43,250 DEBUG [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651 2023-05-31 13:52:43,302 INFO [Listener at localhost.localdomain/42735] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/zookeeper_0, clientPort=53513, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 13:52:43,313 INFO [Listener at localhost.localdomain/42735] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53513 2023-05-31 13:52:43,324 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:43,327 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:43,934 INFO [Listener at localhost.localdomain/42735] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5 with version=8 2023-05-31 13:52:43,934 INFO [Listener at 
localhost.localdomain/42735] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging 2023-05-31 13:52:44,178 INFO [Listener at localhost.localdomain/42735] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-31 13:52:44,534 INFO [Listener at localhost.localdomain/42735] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:52:44,558 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,559 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,559 INFO [Listener at localhost.localdomain/42735] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:52:44,559 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,559 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:52:44,672 INFO [Listener at localhost.localdomain/42735] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, 
hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:52:44,734 DEBUG [Listener at localhost.localdomain/42735] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-31 13:52:44,809 INFO [Listener at localhost.localdomain/42735] ipc.NettyRpcServer(120): Bind to /136.243.18.41:39871 2023-05-31 13:52:44,818 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:44,820 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:44,837 INFO [Listener at localhost.localdomain/42735] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39871 connecting to ZooKeeper ensemble=127.0.0.1:53513 2023-05-31 13:52:44,869 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:398710x0, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:52:44,871 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39871-0x10081828c380000 connected 2023-05-31 13:52:44,892 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:52:44,893 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:52:44,897 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/acl 2023-05-31 13:52:44,905 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39871 2023-05-31 13:52:44,905 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39871 2023-05-31 13:52:44,906 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39871 2023-05-31 13:52:44,906 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39871 2023-05-31 13:52:44,907 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39871 2023-05-31 13:52:44,912 INFO [Listener at localhost.localdomain/42735] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5, hbase.cluster.distributed=false 2023-05-31 13:52:44,969 INFO [Listener at localhost.localdomain/42735] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:52:44,969 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,970 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,970 INFO [Listener at localhost.localdomain/42735] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 
readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:52:44,970 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:52:44,970 INFO [Listener at localhost.localdomain/42735] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:52:44,974 INFO [Listener at localhost.localdomain/42735] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:52:44,977 INFO [Listener at localhost.localdomain/42735] ipc.NettyRpcServer(120): Bind to /136.243.18.41:40513 2023-05-31 13:52:44,978 INFO [Listener at localhost.localdomain/42735] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:52:44,983 DEBUG [Listener at localhost.localdomain/42735] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:52:44,984 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:44,986 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:44,987 INFO [Listener at localhost.localdomain/42735] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40513 connecting to ZooKeeper ensemble=127.0.0.1:53513 2023-05-31 13:52:44,991 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): 
regionserver:405130x0, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:52:44,992 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40513-0x10081828c380001 connected 2023-05-31 13:52:44,992 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:52:44,993 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:52:44,994 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ZKUtil(164): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:52:44,995 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40513 2023-05-31 13:52:44,995 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40513 2023-05-31 13:52:44,995 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40513 2023-05-31 13:52:44,996 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40513 2023-05-31 13:52:44,996 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40513 2023-05-31 13:52:44,998 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding 
backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,007 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:52:45,008 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,027 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:52:45,027 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:52:45,027 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,028 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:52:45,029 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,39871,1685541164047 from backup master directory 2023-05-31 13:52:45,029 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:52:45,031 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,031 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:52:45,032 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:52:45,032 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,034 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-31 13:52:45,035 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-31 13:52:45,118 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase.id with ID: 081864d7-f23b-40a0-8203-d89294960c93 2023-05-31 13:52:45,167 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:52:45,181 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,220 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x33b83dbc to 127.0.0.1:53513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:52:45,249 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f8334fd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:52:45,268 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:52:45,269 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 13:52:45,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:52:45,303 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 
'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store-tmp 2023-05-31 13:52:45,334 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:52:45,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:52:45,335 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:52:45,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:52:45,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:52:45,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:52:45,335 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:52:45,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:52:45,337 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/WALs/jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,358 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39871%2C1685541164047, suffix=, logDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/WALs/jenkins-hbase17.apache.org,39871,1685541164047, archiveDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/oldWALs, maxLogs=10 2023-05-31 13:52:45,376 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:52:45,396 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/WALs/jenkins-hbase17.apache.org,39871,1685541164047/jenkins-hbase17.apache.org%2C39871%2C1685541164047.1685541165374 2023-05-31 13:52:45,396 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK], DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK]] 2023-05-31 13:52:45,397 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 
'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:52:45,397 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:52:45,400 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,401 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,451 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,459 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 13:52:45,478 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 13:52:45,491 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:45,496 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,498 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,511 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:52:45,515 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:52:45,516 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=698993, jitterRate=-0.11118458211421967}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:52:45,516 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:52:45,517 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 13:52:45,533 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 13:52:45,533 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 13:52:45,535 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 13:52:45,537 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-31 13:52:45,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 31 msec 2023-05-31 13:52:45,569 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 13:52:45,591 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 13:52:45,596 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 13:52:45,621 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 13:52:45,624 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 13:52:45,626 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 13:52:45,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 13:52:45,634 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 13:52:45,636 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,638 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 13:52:45,638 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 13:52:45,649 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 13:52:45,652 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:52:45,653 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:52:45,653 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,653 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,39871,1685541164047, sessionid=0x10081828c380000, setting cluster-up flag (Was=false) 2023-05-31 13:52:45,664 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 13:52:45,669 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,673 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:45,676 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 13:52:45,677 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:52:45,679 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.hbase-snapshot/.tmp 2023-05-31 13:52:45,700 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(951): ClusterId : 081864d7-f23b-40a0-8203-d89294960c93 2023-05-31 13:52:45,703 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:52:45,707 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:52:45,708 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:52:45,710 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:52:45,710 DEBUG [RS:0;jenkins-hbase17:40513] zookeeper.ReadOnlyZKClient(139): Connect 0x27a44e2d to 127.0.0.1:53513 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-05-31 13:52:45,714 DEBUG [RS:0;jenkins-hbase17:40513] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2963a4a9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:52:45,715 DEBUG [RS:0;jenkins-hbase17:40513] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d5c255d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:52:45,734 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:40513 2023-05-31 13:52:45,738 INFO [RS:0;jenkins-hbase17:40513] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:52:45,738 INFO [RS:0;jenkins-hbase17:40513] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:52:45,738 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 13:52:45,741 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,39871,1685541164047 with isa=jenkins-hbase17.apache.org/136.243.18.41:40513, startcode=1685541164969 2023-05-31 13:52:45,756 DEBUG [RS:0;jenkins-hbase17:40513] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:52:45,783 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 13:52:45,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:52:45,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:52:45,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:52:45,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:52:45,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-05-31 13:52:45,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:52:45,792 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,793 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541195793 2023-05-31 13:52:45,795 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 13:52:45,798 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:52:45,799 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 13:52:45,804 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 
'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:52:45,807 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 13:52:45,813 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 13:52:45,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 13:52:45,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 13:52:45,815 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 13:52:45,815 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-31 13:52:45,817 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 13:52:45,819 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 13:52:45,819 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 13:52:45,823 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 13:52:45,823 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 13:52:45,826 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541165824,5,FailOnTimeoutGroup] 2023-05-31 13:52:45,829 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541165826,5,FailOnTimeoutGroup] 2023-05-31 13:52:45,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 13:52:45,830 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-05-31 13:52:45,831 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,846 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:52:45,847 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:52:45,847 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5 2023-05-31 13:52:45,872 DEBUG [PEWorker-1] 
regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:52:45,875 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:52:45,879 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/info 2023-05-31 13:52:45,880 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:52:45,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:45,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:52:45,885 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:52:45,888 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:52:45,889 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:45,890 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:52:45,892 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/table 2023-05-31 13:52:45,893 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:52:45,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:45,896 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740 2023-05-31 13:52:45,898 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56391, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:52:45,898 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740 2023-05-31 13:52:45,901 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:52:45,903 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:52:45,907 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:52:45,908 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=817615, jitterRate=0.03965216875076294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:52:45,908 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:52:45,908 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:52:45,908 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:52:45,908 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:52:45,908 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:52:45,909 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:52:45,909 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:52:45,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:45,910 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:52:45,914 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure 
table=hbase:meta 2023-05-31 13:52:45,914 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 13:52:45,922 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:52:45,926 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5 2023-05-31 13:52:45,926 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38351 2023-05-31 13:52:45,926 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:52:45,930 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:52:45,931 DEBUG [RS:0;jenkins-hbase17:40513] zookeeper.ZKUtil(162): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:45,932 WARN [RS:0;jenkins-hbase17:40513] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-31 13:52:45,932 INFO [RS:0;jenkins-hbase17:40513] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:52:45,932 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:45,933 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,40513,1685541164969] 2023-05-31 13:52:45,934 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:52:45,938 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:52:45,943 DEBUG [RS:0;jenkins-hbase17:40513] zookeeper.ZKUtil(162): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:45,951 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:52:45,959 INFO [RS:0;jenkins-hbase17:40513] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:52:45,975 INFO [RS:0;jenkins-hbase17:40513] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:52:45,977 INFO [RS:0;jenkins-hbase17:40513] throttle.PressureAwareCompactionThroughputController(131): Compaction 
throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:52:45,978 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,978 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:52:45,984 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,984 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,984 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service 
name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,985 DEBUG [RS:0;jenkins-hbase17:40513] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:52:45,986 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,986 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,987 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:45,998 INFO [RS:0;jenkins-hbase17:40513] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:52:46,000 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40513,1685541164969-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:52:46,013 INFO [RS:0;jenkins-hbase17:40513] regionserver.Replication(203): jenkins-hbase17.apache.org,40513,1685541164969 started 2023-05-31 13:52:46,013 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,40513,1685541164969, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:40513, sessionid=0x10081828c380001 2023-05-31 13:52:46,013 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:52:46,013 DEBUG [RS:0;jenkins-hbase17:40513] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:46,013 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40513,1685541164969' 2023-05-31 13:52:46,014 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:52:46,014 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40513,1685541164969' 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 13:52:46,015 DEBUG [RS:0;jenkins-hbase17:40513] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:52:46,016 DEBUG [RS:0;jenkins-hbase17:40513] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:52:46,016 INFO [RS:0;jenkins-hbase17:40513] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:52:46,016 INFO [RS:0;jenkins-hbase17:40513] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 13:52:46,091 DEBUG [jenkins-hbase17:39871] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:52:46,094 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,40513,1685541164969, state=OPENING 2023-05-31 13:52:46,102 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:52:46,103 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:52:46,104 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:52:46,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,40513,1685541164969}] 2023-05-31 13:52:46,127 INFO [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40513%2C1685541164969, suffix=, 
logDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969, archiveDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/oldWALs, maxLogs=32 2023-05-31 13:52:46,141 INFO [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541166130 2023-05-31 13:52:46,141 DEBUG [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:52:46,293 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:46,295 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:52:46,299 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38778, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:52:46,313 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:52:46,314 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:52:46,317 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40513%2C1685541164969.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969, archiveDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/oldWALs, maxLogs=32 2023-05-31 13:52:46,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.meta.1685541166318.meta 2023-05-31 13:52:46,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK], DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK]] 2023-05-31 13:52:46,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:52:46,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:52:46,347 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:52:46,351 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:52:46,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:52:46,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:52:46,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:52:46,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:52:46,358 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:52:46,360 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/info 2023-05-31 13:52:46,360 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/info 2023-05-31 13:52:46,361 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:52:46,361 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:46,362 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:52:46,364 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:52:46,364 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:52:46,365 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:52:46,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:46,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:52:46,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/table 2023-05-31 13:52:46,368 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/table 2023-05-31 13:52:46,368 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:52:46,369 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:52:46,372 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740 2023-05-31 13:52:46,375 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740 2023-05-31 13:52:46,378 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:52:46,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:52:46,382 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=800519, jitterRate=0.01791338622570038}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:52:46,382 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:52:46,393 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541166288 2023-05-31 13:52:46,411 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:52:46,411 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:52:46,412 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,40513,1685541164969, state=OPEN 2023-05-31 13:52:46,414 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:52:46,414 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:52:46,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:52:46,420 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,40513,1685541164969 in 308 msec 2023-05-31 13:52:46,425 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:52:46,425 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 500 msec 2023-05-31 13:52:46,431 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 704 msec 2023-05-31 13:52:46,432 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541166431, completionTime=-1 2023-05-31 13:52:46,432 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:52:46,432 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:52:46,488 DEBUG [hconnection-0x45b279a0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:52:46,490 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:52:46,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:52:46,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541226506 2023-05-31 13:52:46,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541286506 2023-05-31 13:52:46,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-05-31 13:52:46,525 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39871,1685541164047-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:46,525 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39871,1685541164047-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:52:46,525 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39871,1685541164047-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:52:46,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:39871, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 13:52:46,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 13:52:46,532 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 
2023-05-31 13:52:46,538 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 13:52:46,539 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 13:52:46,547 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 13:52:46,550 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 13:52:46,554 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 13:52:46,574 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:46,576 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067 empty.
2023-05-31 13:52:46,577 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:46,577 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 13:52:46,631 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 13:52:46,634 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 281e2e4dd3bc64b065bb9f295c0f6067, NAME => 'hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 281e2e4dd3bc64b065bb9f295c0f6067, disabling compactions & flushes
2023-05-31 13:52:46,651 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. after waiting 0 ms
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:46,651 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:46,651 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 281e2e4dd3bc64b065bb9f295c0f6067:
2023-05-31 13:52:46,656 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 13:52:46,670 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541166658"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541166658"}]},"ts":"1685541166658"}
2023-05-31 13:52:46,693 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 13:52:46,695 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 13:52:46,699 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541166695"}]},"ts":"1685541166695"}
2023-05-31 13:52:46,703 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-05-31 13:52:46,712 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=281e2e4dd3bc64b065bb9f295c0f6067, ASSIGN}]
2023-05-31 13:52:46,715 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=281e2e4dd3bc64b065bb9f295c0f6067, ASSIGN
2023-05-31 13:52:46,717 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=281e2e4dd3bc64b065bb9f295c0f6067, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40513,1685541164969; forceNewPlan=false, retain=false
2023-05-31 13:52:46,869 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=281e2e4dd3bc64b065bb9f295c0f6067, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40513,1685541164969
2023-05-31 13:52:46,870 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541166869"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541166869"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541166869"}]},"ts":"1685541166869"}
2023-05-31 13:52:46,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 281e2e4dd3bc64b065bb9f295c0f6067, server=jenkins-hbase17.apache.org,40513,1685541164969}]
2023-05-31 13:52:47,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:47,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 281e2e4dd3bc64b065bb9f295c0f6067, NAME => 'hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:52:47,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:52:47,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,055 INFO [StoreOpener-281e2e4dd3bc64b065bb9f295c0f6067-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,058 DEBUG [StoreOpener-281e2e4dd3bc64b065bb9f295c0f6067-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/info
2023-05-31 13:52:47,058 DEBUG [StoreOpener-281e2e4dd3bc64b065bb9f295c0f6067-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/info
2023-05-31 13:52:47,059 INFO [StoreOpener-281e2e4dd3bc64b065bb9f295c0f6067-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 281e2e4dd3bc64b065bb9f295c0f6067 columnFamilyName info
2023-05-31 13:52:47,060 INFO [StoreOpener-281e2e4dd3bc64b065bb9f295c0f6067-1] regionserver.HStore(310): Store=281e2e4dd3bc64b065bb9f295c0f6067/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:52:47,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,063 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 281e2e4dd3bc64b065bb9f295c0f6067
2023-05-31 13:52:47,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:52:47,072 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 281e2e4dd3bc64b065bb9f295c0f6067; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=874443, jitterRate=0.11191301047801971}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 13:52:47,072 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 281e2e4dd3bc64b065bb9f295c0f6067:
2023-05-31 13:52:47,075 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067., pid=6, masterSystemTime=1685541167034
2023-05-31 13:52:47,079 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:47,079 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.
2023-05-31 13:52:47,080 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=281e2e4dd3bc64b065bb9f295c0f6067, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40513,1685541164969
2023-05-31 13:52:47,081 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541167079"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541167079"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541167079"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541167079"}]},"ts":"1685541167079"}
2023-05-31 13:52:47,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-05-31 13:52:47,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 281e2e4dd3bc64b065bb9f295c0f6067, server=jenkins-hbase17.apache.org,40513,1685541164969 in 206 msec
2023-05-31 13:52:47,093 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-05-31 13:52:47,093 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=281e2e4dd3bc64b065bb9f295c0f6067, ASSIGN in 377 msec
2023-05-31 13:52:47,094 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 13:52:47,095 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541167095"}]},"ts":"1685541167095"}
2023-05-31 13:52:47,100 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-05-31 13:52:47,104 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 13:52:47,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 564 msec
2023-05-31 13:52:47,151 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-05-31 13:52:47,152 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-05-31 13:52:47,153 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:52:47,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-05-31 13:52:47,212 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 13:52:47,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 34 msec
2023-05-31 13:52:47,228 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-05-31 13:52:47,241 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 13:52:47,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec
2023-05-31 13:52:47,254 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-05-31 13:52:47,255 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-05-31 13:52:47,256 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.223sec
2023-05-31 13:52:47,259 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-05-31 13:52:47,261 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-05-31 13:52:47,261 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-05-31 13:52:47,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39871,1685541164047-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-05-31 13:52:47,263 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39871,1685541164047-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-05-31 13:52:47,274 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-05-31 13:52:47,307 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ReadOnlyZKClient(139): Connect 0x18808f04 to 127.0.0.1:53513 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 13:52:47,312 DEBUG [Listener at localhost.localdomain/42735] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@233e90d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 13:52:47,324 DEBUG [hconnection-0x6039193-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 13:52:47,336 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 13:52:47,344 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,39871,1685541164047
2023-05-31 13:52:47,345 INFO [Listener at localhost.localdomain/42735] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:52:47,352 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-05-31 13:52:47,352 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:52:47,353 INFO [Listener at localhost.localdomain/42735] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-05-31 13:52:47,362 DEBUG [Listener at localhost.localdomain/42735] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 13:52:47,366 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33902, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 13:52:47,376 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 13:52:47,376 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 13:52:47,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 13:52:47,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling
2023-05-31 13:52:47,385 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 13:52:47,387 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 13:52:47,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9
2023-05-31 13:52:47,391 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,392 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb empty.
2023-05-31 13:52:47,395 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,395 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions
2023-05-31 13:52:47,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 13:52:47,420 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001
2023-05-31 13:52:47,422 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => b298c71c35841ea506577279fa343fcb, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/.tmp
2023-05-31 13:52:47,439 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:52:47,439 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing b298c71c35841ea506577279fa343fcb, disabling compactions & flushes
2023-05-31 13:52:47,439 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,439 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,439 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. after waiting 0 ms
2023-05-31 13:52:47,439 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,440 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,440 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for b298c71c35841ea506577279fa343fcb:
2023-05-31 13:52:47,444 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 13:52:47,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685541167446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541167446"}]},"ts":"1685541167446"}
2023-05-31 13:52:47,450 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 13:52:47,451 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 13:52:47,451 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541167451"}]},"ts":"1685541167451"}
2023-05-31 13:52:47,453 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta
2023-05-31 13:52:47,456 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=b298c71c35841ea506577279fa343fcb, ASSIGN}]
2023-05-31 13:52:47,458 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=b298c71c35841ea506577279fa343fcb, ASSIGN
2023-05-31 13:52:47,460 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=b298c71c35841ea506577279fa343fcb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40513,1685541164969; forceNewPlan=false, retain=false
2023-05-31 13:52:47,612 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b298c71c35841ea506577279fa343fcb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40513,1685541164969
2023-05-31 13:52:47,613 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685541167612"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541167612"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541167612"}]},"ts":"1685541167612"}
2023-05-31 13:52:47,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure b298c71c35841ea506577279fa343fcb, server=jenkins-hbase17.apache.org,40513,1685541164969}]
2023-05-31 13:52:47,788 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b298c71c35841ea506577279fa343fcb, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:52:47,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:52:47,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,792 INFO [StoreOpener-b298c71c35841ea506577279fa343fcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,795 DEBUG [StoreOpener-b298c71c35841ea506577279fa343fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info
2023-05-31 13:52:47,795 DEBUG [StoreOpener-b298c71c35841ea506577279fa343fcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info
2023-05-31 13:52:47,796 INFO [StoreOpener-b298c71c35841ea506577279fa343fcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b298c71c35841ea506577279fa343fcb columnFamilyName info
2023-05-31 13:52:47,797 INFO [StoreOpener-b298c71c35841ea506577279fa343fcb-1] regionserver.HStore(310): Store=b298c71c35841ea506577279fa343fcb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:52:47,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b298c71c35841ea506577279fa343fcb
2023-05-31 13:52:47,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:52:47,809 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b298c71c35841ea506577279fa343fcb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=778954, jitterRate=-0.009509265422821045}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 13:52:47,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b298c71c35841ea506577279fa343fcb:
2023-05-31 13:52:47,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb., pid=11, masterSystemTime=1685541167775
2023-05-31 13:52:47,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.
2023-05-31 13:52:47,813 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b298c71c35841ea506577279fa343fcb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:52:47,814 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685541167813"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541167813"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541167813"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541167813"}]},"ts":"1685541167813"} 2023-05-31 13:52:47,820 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 13:52:47,820 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure b298c71c35841ea506577279fa343fcb, server=jenkins-hbase17.apache.org,40513,1685541164969 in 197 msec 2023-05-31 13:52:47,823 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 13:52:47,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=b298c71c35841ea506577279fa343fcb, ASSIGN in 364 msec 2023-05-31 13:52:47,825 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:52:47,825 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541167825"}]},"ts":"1685541167825"} 2023-05-31 13:52:47,828 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-31 13:52:47,831 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:52:47,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 451 msec 2023-05-31 13:52:51,871 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-31 13:52:51,957 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 13:52:51,959 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 13:52:51,960 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-31 13:52:54,175 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 13:52:54,177 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-31 13:52:57,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39871] 
master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 13:52:57,419 INFO [Listener at localhost.localdomain/42735] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-31 13:52:57,425 DEBUG [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-31 13:52:57,427 DEBUG [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:53:09,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40513] regionserver.HRegion(9158): Flush requested on b298c71c35841ea506577279fa343fcb 2023-05-31 13:53:09,484 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b298c71c35841ea506577279fa343fcb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:53:09,554 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/34460460b89941f09e796394bf6619fc 2023-05-31 13:53:09,596 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/34460460b89941f09e796394bf6619fc as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc 2023-05-31 13:53:09,607 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 13:53:09,609 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for b298c71c35841ea506577279fa343fcb in 126ms, sequenceid=11, compaction requested=false 2023-05-31 13:53:09,610 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:53:17,708 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:19,920 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 206 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:22,127 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:24,333 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:24,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40513] regionserver.HRegion(9158): Flush requested on b298c71c35841ea506577279fa343fcb 2023-05-31 13:53:24,333 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b298c71c35841ea506577279fa343fcb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:53:24,536 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:24,560 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/48174dd14643469283dce4fae3375118 2023-05-31 13:53:24,575 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/48174dd14643469283dce4fae3375118 as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118 2023-05-31 13:53:24,583 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118, entries=7, sequenceid=21, filesize=12.1 K 2023-05-31 13:53:24,786 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:24,787 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for b298c71c35841ea506577279fa343fcb in 453ms, sequenceid=21, compaction requested=false 2023-05-31 13:53:24,788 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:53:24,788 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-31 13:53:24,789 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:53:24,792 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc because midkey is the same as first or last row 2023-05-31 13:53:26,539 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:28,743 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:28,745 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C40513%2C1685541164969:(num 1685541166130) roll requested 2023-05-31 13:53:28,745 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 203 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:28,960 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:28,963 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541166130 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541208745 2023-05-31 13:53:28,965 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:28,965 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541166130 is not closed yet, will try archiving it next time 2023-05-31 13:53:38,759 INFO [Listener at localhost.localdomain/42735] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 13:53:43,761 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], 
DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:43,762 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:43,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40513] regionserver.HRegion(9158): Flush requested on b298c71c35841ea506577279fa343fcb 2023-05-31 13:53:43,762 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C40513%2C1685541164969:(num 1685541208745) roll requested 2023-05-31 13:53:43,762 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b298c71c35841ea506577279fa343fcb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:53:45,763 INFO [Listener at localhost.localdomain/42735] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 13:53:48,765 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:48,766 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:48,783 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], 
DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:48,784 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:48,785 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541208745 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541223762 2023-05-31 13:53:48,785 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42031,DS-d81c9a75-f6eb-4bec-b1aa-d0a10c85359f,DISK], DatanodeInfoWithStorage[127.0.0.1:38643,DS-e550d69e-32e4-4963-9b5d-474463fe034b,DISK]] 2023-05-31 13:53:48,785 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969/jenkins-hbase17.apache.org%2C40513%2C1685541164969.1685541208745 is not closed yet, will try archiving it next time 2023-05-31 13:53:48,787 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/08f6cae2c23346f6b76ce256ee09669c 
2023-05-31 13:53:48,798 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/08f6cae2c23346f6b76ce256ee09669c as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c 2023-05-31 13:53:48,806 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c, entries=7, sequenceid=31, filesize=12.1 K 2023-05-31 13:53:48,808 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for b298c71c35841ea506577279fa343fcb in 5046ms, sequenceid=31, compaction requested=true 2023-05-31 13:53:48,808 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:53:48,808 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-31 13:53:48,808 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:53:48,809 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc because midkey is the same as first or last row 2023-05-31 13:53:48,810 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): 
Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:53:48,810 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:53:48,815 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:53:48,817 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HStore(1912): b298c71c35841ea506577279fa343fcb/info is initiating minor compaction (all files) 2023-05-31 13:53:48,818 INFO [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of b298c71c35841ea506577279fa343fcb/info in TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 
2023-05-31 13:53:48,818 INFO [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c] into tmpdir=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp, totalSize=36.3 K 2023-05-31 13:53:48,820 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] compactions.Compactor(207): Compacting 34460460b89941f09e796394bf6619fc, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685541177434 2023-05-31 13:53:48,820 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] compactions.Compactor(207): Compacting 48174dd14643469283dce4fae3375118, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685541191486 2023-05-31 13:53:48,821 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] compactions.Compactor(207): Compacting 08f6cae2c23346f6b76ce256ee09669c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685541206336 2023-05-31 13:53:48,845 INFO [RS:0;jenkins-hbase17:40513-shortCompactions-0] throttle.PressureAwareThroughputController(145): b298c71c35841ea506577279fa343fcb#info#compaction#3 average throughput is 21.55 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:53:48,865 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/9c0c0a481a3b487da538a368336b53dc as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/9c0c0a481a3b487da538a368336b53dc 2023-05-31 13:53:48,885 INFO [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b298c71c35841ea506577279fa343fcb/info of b298c71c35841ea506577279fa343fcb into 9c0c0a481a3b487da538a368336b53dc(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:53:48,885 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:53:48,885 INFO [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb., storeName=b298c71c35841ea506577279fa343fcb/info, priority=13, startTime=1685541228810; duration=0sec 2023-05-31 13:53:48,886 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-31 13:53:48,886 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:53:48,887 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/9c0c0a481a3b487da538a368336b53dc because midkey is the same as first or last row 2023-05-31 13:53:48,887 DEBUG [RS:0;jenkins-hbase17:40513-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:54:00,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40513] regionserver.HRegion(9158): Flush requested on b298c71c35841ea506577279fa343fcb 2023-05-31 13:54:00,887 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing b298c71c35841ea506577279fa343fcb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:54:00,913 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), 
to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/f3822a5a81a14ac78830061873c51cb8 2023-05-31 13:54:00,923 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/f3822a5a81a14ac78830061873c51cb8 as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/f3822a5a81a14ac78830061873c51cb8 2023-05-31 13:54:00,936 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/f3822a5a81a14ac78830061873c51cb8, entries=7, sequenceid=42, filesize=12.1 K 2023-05-31 13:54:00,938 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for b298c71c35841ea506577279fa343fcb in 51ms, sequenceid=42, compaction requested=false 2023-05-31 13:54:00,939 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:54:00,939 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-31 13:54:00,939 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:54:00,939 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/9c0c0a481a3b487da538a368336b53dc because midkey is the same as first or last row 2023-05-31 13:54:08,898 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:54:08,898 INFO [Listener at localhost.localdomain/42735] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 13:54:08,899 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18808f04 to 127.0.0.1:53513 2023-05-31 13:54:08,899 DEBUG [Listener at localhost.localdomain/42735] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:08,899 DEBUG [Listener at localhost.localdomain/42735] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:54:08,899 DEBUG [Listener at localhost.localdomain/42735] util.JVMClusterUtil(257): Found active master hash=395331719, stopped=false 2023-05-31 13:54:08,899 INFO [Listener at localhost.localdomain/42735] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:54:08,901 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:54:08,901 INFO [Listener at localhost.localdomain/42735] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:54:08,901 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:54:08,901 DEBUG [Listener at 
localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:08,901 DEBUG [Listener at localhost.localdomain/42735] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33b83dbc to 127.0.0.1:53513 2023-05-31 13:54:08,902 DEBUG [Listener at localhost.localdomain/42735] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:08,902 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:08,902 INFO [Listener at localhost.localdomain/42735] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,40513,1685541164969' ***** 2023-05-31 13:54:08,902 INFO [Listener at localhost.localdomain/42735] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:54:08,903 INFO [RS:0;jenkins-hbase17:40513] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:54:08,903 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:08,903 INFO [RS:0;jenkins-hbase17:40513] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 13:54:08,903 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:54:08,903 INFO [RS:0;jenkins-hbase17:40513] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 13:54:08,903 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(3303): Received CLOSE for b298c71c35841ea506577279fa343fcb 2023-05-31 13:54:08,904 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(3303): Received CLOSE for 281e2e4dd3bc64b065bb9f295c0f6067 2023-05-31 13:54:08,904 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:54:08,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b298c71c35841ea506577279fa343fcb, disabling compactions & flushes 2023-05-31 13:54:08,904 DEBUG [RS:0;jenkins-hbase17:40513] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27a44e2d to 127.0.0.1:53513 2023-05-31 13:54:08,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:54:08,904 DEBUG [RS:0;jenkins-hbase17:40513] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:08,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:54:08,905 INFO [RS:0;jenkins-hbase17:40513] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 13:54:08,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. after waiting 0 ms 2023-05-31 13:54:08,905 INFO [RS:0;jenkins-hbase17:40513] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-31 13:54:08,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:54:08,905 INFO [RS:0;jenkins-hbase17:40513] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 13:54:08,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing b298c71c35841ea506577279fa343fcb 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-31 13:54:08,905 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 13:54:08,905 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 13:54:08,905 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1478): Online Regions={b298c71c35841ea506577279fa343fcb=TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb., 281e2e4dd3bc64b065bb9f295c0f6067=hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067., 1588230740=hbase:meta,,1.1588230740} 2023-05-31 13:54:08,906 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:54:08,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:54:08,906 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:54:08,906 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:54:08,906 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 
2023-05-31 13:54:08,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-31 13:54:08,907 DEBUG [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1504): Waiting on 1588230740, 281e2e4dd3bc64b065bb9f295c0f6067, b298c71c35841ea506577279fa343fcb 2023-05-31 13:54:08,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/e7ca100ad5ae496abcf1f33c7e65fd73 2023-05-31 13:54:08,936 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/.tmp/info/ee805fbb49554ba3b0e15d40564b6784 2023-05-31 13:54:08,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/.tmp/info/e7ca100ad5ae496abcf1f33c7e65fd73 as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/e7ca100ad5ae496abcf1f33c7e65fd73 2023-05-31 13:54:08,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/e7ca100ad5ae496abcf1f33c7e65fd73, entries=3, sequenceid=48, filesize=7.9 K 2023-05-31 13:54:08,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for b298c71c35841ea506577279fa343fcb in 61ms, sequenceid=48, compaction requested=true 2023-05-31 13:54:08,975 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c] to archive 2023-05-31 13:54:08,980 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 13:54:08,987 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-31 13:54:08,988 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/archive/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/34460460b89941f09e796394bf6619fc 2023-05-31 13:54:08,992 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-31 13:54:08,998 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118 to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/archive/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/48174dd14643469283dce4fae3375118 2023-05-31 13:54:08,998 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/.tmp/table/c993f4b1da41495cbfeb431369d748c4 2023-05-31 13:54:09,000 DEBUG 
[StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/archive/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/info/08f6cae2c23346f6b76ce256ee09669c 2023-05-31 13:54:09,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/.tmp/info/ee805fbb49554ba3b0e15d40564b6784 as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/info/ee805fbb49554ba3b0e15d40564b6784 2023-05-31 13:54:09,017 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/info/ee805fbb49554ba3b0e15d40564b6784, entries=20, sequenceid=14, filesize=7.4 K 2023-05-31 13:54:09,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/.tmp/table/c993f4b1da41495cbfeb431369d748c4 as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/table/c993f4b1da41495cbfeb431369d748c4 2023-05-31 13:54:09,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/table/c993f4b1da41495cbfeb431369d748c4, entries=4, sequenceid=14, filesize=4.8 K 2023-05-31 13:54:09,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/default/TestLogRolling-testSlowSyncLogRolling/b298c71c35841ea506577279fa343fcb/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-31 13:54:09,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 125ms, sequenceid=14, compaction requested=false 2023-05-31 13:54:09,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:54:09,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b298c71c35841ea506577279fa343fcb: 2023-05-31 13:54:09,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685541167376.b298c71c35841ea506577279fa343fcb. 2023-05-31 13:54:09,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 281e2e4dd3bc64b065bb9f295c0f6067, disabling compactions & flushes 2023-05-31 13:54:09,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. 2023-05-31 13:54:09,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. 
2023-05-31 13:54:09,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. after waiting 0 ms 2023-05-31 13:54:09,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. 2023-05-31 13:54:09,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 281e2e4dd3bc64b065bb9f295c0f6067 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 13:54:09,043 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 13:54:09,044 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 13:54:09,046 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:54:09,046 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:54:09,046 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 13:54:09,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/.tmp/info/79d370c2548b4c44afdb34857910532a 2023-05-31 13:54:09,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/.tmp/info/79d370c2548b4c44afdb34857910532a as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/info/79d370c2548b4c44afdb34857910532a 2023-05-31 13:54:09,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/info/79d370c2548b4c44afdb34857910532a, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 13:54:09,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 281e2e4dd3bc64b065bb9f295c0f6067 in 36ms, sequenceid=6, compaction requested=false 2023-05-31 13:54:09,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/data/hbase/namespace/281e2e4dd3bc64b065bb9f295c0f6067/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 13:54:09,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. 2023-05-31 13:54:09,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 281e2e4dd3bc64b065bb9f295c0f6067: 2023-05-31 13:54:09,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685541166539.281e2e4dd3bc64b065bb9f295c0f6067. 
2023-05-31 13:54:09,107 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,40513,1685541164969; all regions closed. 2023-05-31 13:54:09,108 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:54:09,116 DEBUG [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/oldWALs 2023-05-31 13:54:09,116 INFO [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C40513%2C1685541164969.meta:.meta(num 1685541166318) 2023-05-31 13:54:09,117 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/WALs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:54:09,126 DEBUG [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/oldWALs 2023-05-31 13:54:09,127 INFO [RS:0;jenkins-hbase17:40513] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C40513%2C1685541164969:(num 1685541223762) 2023-05-31 13:54:09,127 DEBUG [RS:0;jenkins-hbase17:40513] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:09,127 INFO [RS:0;jenkins-hbase17:40513] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:09,127 INFO [RS:0;jenkins-hbase17:40513] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 13:54:09,127 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 13:54:09,128 INFO [RS:0;jenkins-hbase17:40513] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:40513 2023-05-31 13:54:09,133 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:09,133 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40513,1685541164969 2023-05-31 13:54:09,134 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:09,134 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,40513,1685541164969] 2023-05-31 13:54:09,134 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40513,1685541164969; numProcessing=1 2023-05-31 13:54:09,135 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40513,1685541164969 already deleted, retry=false 2023-05-31 13:54:09,135 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,40513,1685541164969 expired; onlineServers=0 2023-05-31 13:54:09,135 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,39871,1685541164047' ***** 2023-05-31 13:54:09,135 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 
2023-05-31 13:54:09,136 DEBUG [M:0;jenkins-hbase17:39871] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56ad42bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:54:09,136 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:54:09,136 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,39871,1685541164047; all regions closed. 2023-05-31 13:54:09,136 DEBUG [M:0;jenkins-hbase17:39871] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:09,136 DEBUG [M:0;jenkins-hbase17:39871] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 13:54:09,136 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 13:54:09,136 DEBUG [M:0;jenkins-hbase17:39871] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 13:54:09,136 INFO [M:0;jenkins-hbase17:39871] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 13:54:09,136 INFO [M:0;jenkins-hbase17:39871] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 13:54:09,137 INFO [M:0;jenkins-hbase17:39871] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-05-31 13:54:09,136 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541165824] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541165824,5,FailOnTimeoutGroup] 2023-05-31 13:54:09,141 DEBUG [M:0;jenkins-hbase17:39871] master.HMaster(1512): Stopping service threads 2023-05-31 13:54:09,136 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541165826] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541165826,5,FailOnTimeoutGroup] 2023-05-31 13:54:09,141 INFO [M:0;jenkins-hbase17:39871] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 13:54:09,142 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:09,143 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:09,143 INFO [M:0;jenkins-hbase17:39871] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 13:54:09,143 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 13:54:09,143 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:09,143 DEBUG [M:0;jenkins-hbase17:39871] zookeeper.ZKUtil(398): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 13:54:09,143 WARN [M:0;jenkins-hbase17:39871] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 13:54:09,143 INFO [M:0;jenkins-hbase17:39871] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 13:54:09,144 INFO [M:0;jenkins-hbase17:39871] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 13:54:09,144 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:54:09,144 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:09,144 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:09,144 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:54:09,144 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:54:09,144 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-05-31 13:54:09,161 INFO [M:0;jenkins-hbase17:39871] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6cbe5d4f4a9d45d4ba0baec70cd9ff0c 2023-05-31 13:54:09,166 INFO [M:0;jenkins-hbase17:39871] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6cbe5d4f4a9d45d4ba0baec70cd9ff0c 2023-05-31 13:54:09,167 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6cbe5d4f4a9d45d4ba0baec70cd9ff0c as hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6cbe5d4f4a9d45d4ba0baec70cd9ff0c 2023-05-31 13:54:09,174 INFO [M:0;jenkins-hbase17:39871] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6cbe5d4f4a9d45d4ba0baec70cd9ff0c 2023-05-31 13:54:09,174 INFO [M:0;jenkins-hbase17:39871] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6cbe5d4f4a9d45d4ba0baec70cd9ff0c, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 13:54:09,175 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=100, 
compaction requested=false 2023-05-31 13:54:09,176 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:09,177 DEBUG [M:0;jenkins-hbase17:39871] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:54:09,177 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/MasterData/WALs/jenkins-hbase17.apache.org,39871,1685541164047 2023-05-31 13:54:09,181 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 13:54:09,181 INFO [M:0;jenkins-hbase17:39871] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 13:54:09,182 INFO [M:0;jenkins-hbase17:39871] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:39871 2023-05-31 13:54:09,183 DEBUG [M:0;jenkins-hbase17:39871] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,39871,1685541164047 already deleted, retry=false 2023-05-31 13:54:09,235 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:09,235 INFO [RS:0;jenkins-hbase17:40513] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,40513,1685541164969; zookeeper connection closed. 
2023-05-31 13:54:09,235 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): regionserver:40513-0x10081828c380001, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:09,236 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@51e0d6bf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@51e0d6bf 2023-05-31 13:54:09,236 INFO [Listener at localhost.localdomain/42735] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 13:54:09,336 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:09,337 DEBUG [Listener at localhost.localdomain/42735-EventThread] zookeeper.ZKWatcher(600): master:39871-0x10081828c380000, quorum=127.0.0.1:53513, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:09,337 INFO [M:0;jenkins-hbase17:39871] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,39871,1685541164047; zookeeper connection closed. 
2023-05-31 13:54:09,338 WARN [Listener at localhost.localdomain/42735] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:54:09,342 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:54:09,446 WARN [BP-1478423288-136.243.18.41-1685541161380 heartbeating to localhost.localdomain/127.0.0.1:38351] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 13:54:09,446 WARN [BP-1478423288-136.243.18.41-1685541161380 heartbeating to localhost.localdomain/127.0.0.1:38351] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1478423288-136.243.18.41-1685541161380 (Datanode Uuid a9065701-352f-4810-b096-25b6f9d0ea2d) service to localhost.localdomain/127.0.0.1:38351
2023-05-31 13:54:09,448 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/dfs/data/data3/current/BP-1478423288-136.243.18.41-1685541161380] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:54:09,448 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/dfs/data/data4/current/BP-1478423288-136.243.18.41-1685541161380] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:54:09,449 WARN [Listener at localhost.localdomain/42735] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:54:09,451 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:54:09,554 WARN [BP-1478423288-136.243.18.41-1685541161380 heartbeating to localhost.localdomain/127.0.0.1:38351] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 13:54:09,554 WARN [BP-1478423288-136.243.18.41-1685541161380 heartbeating to localhost.localdomain/127.0.0.1:38351] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1478423288-136.243.18.41-1685541161380 (Datanode Uuid 59a40338-8ac8-4d82-a585-66e88d5e6205) service to localhost.localdomain/127.0.0.1:38351
2023-05-31 13:54:09,555 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/dfs/data/data1/current/BP-1478423288-136.243.18.41-1685541161380] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:54:09,555 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/cluster_2341329b-fede-a3a9-ccbf-1fef1551413c/dfs/data/data2/current/BP-1478423288-136.243.18.41-1685541161380] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:54:09,585 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 13:54:09,695 INFO [Listener at localhost.localdomain/42735] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 13:54:09,734 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 13:54:09,748 INFO [Listener at localhost.localdomain/42735] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10)

Potentially hanging thread:
nioEventLoopGroup-3-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-5-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Timer for 'HBase' metrics system
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:38351 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: regionserver/jenkins-hbase17:0.procedureResultReporter
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)

Potentially hanging thread: HBase-Metrics2-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-3-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
    org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Listener at localhost.localdomain/42735
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: SessionTracker
    java.lang.Thread.sleep(Native Method)
    org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151)

Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-2-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:38351 from jenkins.hfs.0
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-1-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-3-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-4-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Parameter Sending Thread #1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-1-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-2-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: region-location-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-4-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-3-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: region-location-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: SnapshotHandlerChoreCleaner
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-5-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-4-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:38351
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:38351 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: RpcClient-timer-pool-0
    java.lang.Thread.sleep(Native Method)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@4df05744
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253)
    org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
    org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: regionserver/jenkins-hbase17:0.leaseChecker
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82)

Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38351
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Parameter Sending Thread #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-1-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-3-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-2-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-3-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Monitor thread for TaskMonitor
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-5-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

- Thread LEAK? -, OpenFileDescriptor=439 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=262 (was 332), ProcessCount=170 (was 170), AvailableMemoryMB=8299 (was 8962)
2023-05-31 13:54:09,758 INFO [Listener at localhost.localdomain/42735] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=262, ProcessCount=170, AvailableMemoryMB=8298
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/hadoop.log.dir so I do NOT create it in target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1867dd54-3b0b-b2a0-4c91-2d2239639651/hadoop.tmp.dir so I do NOT create it in target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261, deleteOnExit=true
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 13:54:09,759 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/test.cache.data in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/hadoop.log.dir in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 13:54:09,760 DEBUG [Listener at localhost.localdomain/42735] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:54:09,760 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/nfs.dump.dir in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir in system properties and HBase conf 2023-05-31 13:54:09,761 INFO [Listener at localhost.localdomain/42735] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:54:09,762 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 13:54:09,762 INFO [Listener at localhost.localdomain/42735] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 13:54:09,763 WARN [Listener at localhost.localdomain/42735] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:54:09,765 WARN [Listener at localhost.localdomain/42735] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:54:09,765 WARN [Listener at localhost.localdomain/42735] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:54:09,791 WARN [Listener at localhost.localdomain/42735] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:54:09,793 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:54:09,798 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_localdomain_42541_hdfs____105e6m/webapp 2023-05-31 13:54:09,872 INFO [Listener at localhost.localdomain/42735] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42541 2023-05-31 13:54:09,873 WARN [Listener at localhost.localdomain/42735] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:54:09,875 WARN [Listener at localhost.localdomain/42735] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:54:09,875 WARN [Listener at localhost.localdomain/42735] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:54:09,910 WARN [Listener at localhost.localdomain/34425] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:54:09,924 WARN [Listener at localhost.localdomain/34425] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:54:09,929 WARN [Listener at localhost.localdomain/34425] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:54:09,930 INFO [Listener at localhost.localdomain/34425] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:54:09,934 INFO [Listener at localhost.localdomain/34425] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_35923_datanode____.eqydd/webapp 2023-05-31 13:54:09,991 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:10,030 INFO [Listener at localhost.localdomain/34425] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35923 2023-05-31 13:54:10,042 WARN [Listener at localhost.localdomain/33991] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:54:10,072 WARN [Listener at localhost.localdomain/33991] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming 
MILLISECONDS 2023-05-31 13:54:10,075 WARN [Listener at localhost.localdomain/33991] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:54:10,076 INFO [Listener at localhost.localdomain/33991] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:54:10,084 INFO [Listener at localhost.localdomain/33991] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_37141_datanode____.aoetu7/webapp 2023-05-31 13:54:10,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b74f3bf91fbf78: Processing first storage report for DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb from datanode b6ea9468-0ffe-4e08-a4bc-8d189d9940f2 2023-05-31 13:54:10,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b74f3bf91fbf78: from storage DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb node DatanodeRegistration(127.0.0.1:45811, datanodeUuid=b6ea9468-0ffe-4e08-a4bc-8d189d9940f2, infoPort=43195, infoSecurePort=0, ipcPort=33991, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:54:10,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b74f3bf91fbf78: Processing first storage report for DS-3262130d-c29b-4540-bb9c-076d319f9fad from datanode b6ea9468-0ffe-4e08-a4bc-8d189d9940f2 2023-05-31 13:54:10,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b74f3bf91fbf78: from storage DS-3262130d-c29b-4540-bb9c-076d319f9fad node DatanodeRegistration(127.0.0.1:45811, 
datanodeUuid=b6ea9468-0ffe-4e08-a4bc-8d189d9940f2, infoPort=43195, infoSecurePort=0, ipcPort=33991, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:54:10,176 INFO [Listener at localhost.localdomain/33991] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37141 2023-05-31 13:54:10,186 WARN [Listener at localhost.localdomain/37517] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:54:10,270 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd0177d22da1e3b01: Processing first storage report for DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4 from datanode b3cbc322-f99d-488f-af11-788f393b0fa3 2023-05-31 13:54:10,270 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd0177d22da1e3b01: from storage DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4 node DatanodeRegistration(127.0.0.1:37969, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=44809, infoSecurePort=0, ipcPort=37517, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:54:10,270 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd0177d22da1e3b01: Processing first storage report for DS-4a95328c-caec-402b-8308-82d21079cb6e from datanode b3cbc322-f99d-488f-af11-788f393b0fa3 2023-05-31 13:54:10,270 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd0177d22da1e3b01: from storage DS-4a95328c-caec-402b-8308-82d21079cb6e node DatanodeRegistration(127.0.0.1:37969, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=44809, infoSecurePort=0, ipcPort=37517, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: 
false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:54:10,306 DEBUG [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f 2023-05-31 13:54:10,313 INFO [Listener at localhost.localdomain/37517] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/zookeeper_0, clientPort=57632, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 13:54:10,315 INFO [Listener at localhost.localdomain/37517] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57632 2023-05-31 13:54:10,315 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,317 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,346 INFO [Listener at localhost.localdomain/37517] util.FSUtils(471): Created version 
file at hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9 with version=8 2023-05-31 13:54:10,346 INFO [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging 2023-05-31 13:54:10,347 INFO [Listener at localhost.localdomain/37517] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:54:10,348 INFO [Listener at localhost.localdomain/37517] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, 
hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:54:10,350 INFO [Listener at localhost.localdomain/37517] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33819 2023-05-31 13:54:10,350 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,351 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,353 INFO [Listener at localhost.localdomain/37517] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33819 connecting to ZooKeeper ensemble=127.0.0.1:57632 2023-05-31 13:54:10,364 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:338190x0, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:54:10,368 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33819-0x1008183e0240000 connected 2023-05-31 13:54:10,402 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:10,402 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:10,403 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:54:10,407 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33819 2023-05-31 13:54:10,408 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33819 2023-05-31 13:54:10,408 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33819 2023-05-31 13:54:10,408 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33819 2023-05-31 13:54:10,412 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33819 2023-05-31 13:54:10,412 INFO [Listener at localhost.localdomain/37517] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9, hbase.cluster.distributed=false 2023-05-31 13:54:10,425 INFO [Listener at localhost.localdomain/37517] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:54:10,425 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,425 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,425 INFO [Listener at localhost.localdomain/37517] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:54:10,425 INFO [Listener at localhost.localdomain/37517] 
ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:10,426 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:54:10,426 INFO [Listener at localhost.localdomain/37517] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:54:10,427 INFO [Listener at localhost.localdomain/37517] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36801 2023-05-31 13:54:10,428 INFO [Listener at localhost.localdomain/37517] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:54:10,432 DEBUG [Listener at localhost.localdomain/37517] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:54:10,434 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,435 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,437 INFO [Listener at localhost.localdomain/37517] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36801 connecting to ZooKeeper ensemble=127.0.0.1:57632 2023-05-31 13:54:10,440 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:368010x0, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
2023-05-31 13:54:10,444 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): regionserver:368010x0, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:10,444 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): regionserver:368010x0, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:10,445 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): regionserver:368010x0, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:54:10,449 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36801-0x1008183e0240001 connected 2023-05-31 13:54:10,449 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36801 2023-05-31 13:54:10,451 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36801 2023-05-31 13:54:10,456 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36801 2023-05-31 13:54:10,457 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36801 2023-05-31 13:54:10,460 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36801 2023-05-31 13:54:10,461 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:10,463 DEBUG [Listener at localhost.localdomain/37517-EventThread] 
zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:54:10,463 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:10,464 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:10,464 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:10,465 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:10,466 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:54:10,468 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,33819,1685541250347 from backup master directory 2023-05-31 13:54:10,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:54:10,470 DEBUG [Listener at 
localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:10,470 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:54:10,470 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:54:10,470 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:10,503 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/hbase.id with ID: a844cb0b-cc88-4160-85db-ebc76b96fcc9 2023-05-31 13:54:10,520 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:10,522 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:10,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4403c638 to 127.0.0.1:57632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:54:10,547 
DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@699dbe57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:54:10,547 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:54:10,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 13:54:10,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:54:10,550 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store-tmp 2023-05-31 13:54:10,562 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:54:10,562 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 13:54:10,562 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:54:10,563 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:54:10,563 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 13:54:10,563 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:54:10,563 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:54:10,563 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 13:54:10,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347
2023-05-31 13:54:10,567 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33819%2C1685541250347, suffix=, logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347, archiveDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/oldWALs, maxLogs=10
2023-05-31 13:54:10,577 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541250567
2023-05-31 13:54:10,577 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK], DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]]
2023-05-31 13:54:10,577 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:54:10,577 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:54:10,577 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,577 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,580 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,583 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-05-31 13:54:10,584 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-05-31 13:54:10,585 INFO  [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:54:10,588 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,589 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,592 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 13:54:10,597 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:54:10,598 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=730979, jitterRate=-0.07051236927509308}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 13:54:10,598 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 13:54:10,598 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-05-31 13:54:10,599 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-05-31 13:54:10,600 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-05-31 13:54:10,600 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-05-31 13:54:10,605 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 4 msec
2023-05-31 13:54:10,605 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-05-31 13:54:10,606 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-05-31 13:54:10,610 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-05-31 13:54:10,612 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-05-31 13:54:10,625 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-05-31 13:54:10,626 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-05-31 13:54:10,627 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-05-31 13:54:10,627 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-05-31 13:54:10,627 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-05-31 13:54:10,630 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:54:10,631 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-05-31 13:54:10,632 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-05-31 13:54:10,633 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-05-31 13:54:10,636 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 13:54:10,637 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:54:10,636 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 13:54:10,637 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,33819,1685541250347, sessionid=0x1008183e0240000, setting cluster-up flag (Was=false)
2023-05-31 13:54:10,642 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:54:10,646 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-05-31 13:54:10,647 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33819,1685541250347
2023-05-31 13:54:10,650 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:54:10,653 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-05-31 13:54:10,654 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33819,1685541250347
2023-05-31 13:54:10,656 WARN  [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.hbase-snapshot/.tmp
2023-05-31 13:54:10,666 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-05-31 13:54:10,669 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,671 INFO  [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(951): ClusterId : a844cb0b-cc88-4160-85db-ebc76b96fcc9
2023-05-31 13:54:10,671 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-05-31 13:54:10,684 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541280684
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-05-31 13:54:10,685 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-05-31 13:54:10,686 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,686 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-05-31 13:54:10,686 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-05-31 13:54:10,693 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-05-31 13:54:10,693 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-05-31 13:54:10,693 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 13:54:10,693 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-05-31 13:54:10,693 INFO  [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-05-31 13:54:10,695 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-05-31 13:54:10,695 INFO  [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 13:54:10,701 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-05-31 13:54:10,701 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-05-31 13:54:10,703 DEBUG [RS:0;jenkins-hbase17:36801] zookeeper.ReadOnlyZKClient(139): Connect 0x390cd212 to 127.0.0.1:57632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 13:54:10,708 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541250704,5,FailOnTimeoutGroup]
2023-05-31 13:54:10,714 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541250714,5,FailOnTimeoutGroup]
2023-05-31 13:54:10,714 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,715 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-05-31 13:54:10,715 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,715 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,729 DEBUG [RS:0;jenkins-hbase17:36801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37cb3b43, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 13:54:10,732 DEBUG [RS:0;jenkins-hbase17:36801] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33f4b962, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-05-31 13:54:10,747 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:36801
2023-05-31 13:54:10,747 INFO  [RS:0;jenkins-hbase17:36801] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-05-31 13:54:10,747 INFO  [RS:0;jenkins-hbase17:36801] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-05-31 13:54:10,747 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1022): About to register with Master.
2023-05-31 13:54:10,748 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 13:54:10,749 INFO  [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 13:54:10,749 INFO  [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9
2023-05-31 13:54:10,750 INFO  [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,33819,1685541250347 with isa=jenkins-hbase17.apache.org/136.243.18.41:36801, startcode=1685541250424
2023-05-31 13:54:10,750 DEBUG [RS:0;jenkins-hbase17:36801] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-05-31 13:54:10,766 INFO  [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:50671, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2023-05-31 13:54:10,769 INFO  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36801,1685541250424
2023-05-31 13:54:10,770 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9
2023-05-31 13:54:10,770 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34425
2023-05-31 13:54:10,770 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-05-31 13:54:10,774 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 13:54:10,774 DEBUG [RS:0;jenkins-hbase17:36801] zookeeper.ZKUtil(162): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36801,1685541250424
2023-05-31 13:54:10,774 WARN  [RS:0;jenkins-hbase17:36801] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 13:54:10,774 INFO  [RS:0;jenkins-hbase17:36801] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 13:54:10,774 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424
2023-05-31 13:54:10,784 INFO  [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36801,1685541250424]
2023-05-31 13:54:10,789 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:54:10,791 DEBUG [RS:0;jenkins-hbase17:36801] zookeeper.ZKUtil(162): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36801,1685541250424
2023-05-31 13:54:10,792 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-05-31 13:54:10,792 INFO  [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 13:54:10,792 INFO  [RS:0;jenkins-hbase17:36801] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-05-31 13:54:10,802 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/info
2023-05-31 13:54:10,803 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 13:54:10,804 INFO  [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:54:10,804 INFO  [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 13:54:10,806 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/rep_barrier
2023-05-31 13:54:10,806 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 13:54:10,807 INFO  [RS:0;jenkins-hbase17:36801] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-05-31 13:54:10,814 INFO  [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:54:10,814 INFO  [RS:0;jenkins-hbase17:36801] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-05-31 13:54:10,814 INFO  [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 13:54:10,814 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,815 INFO  [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-05-31 13:54:10,816 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,816 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,816 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,817 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/table
2023-05-31 13:54:10,817 DEBUG [RS:0;jenkins-hbase17:36801] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-05-31 13:54:10,818 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 13:54:10,818 INFO  [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:54:10,822 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740
2023-05-31 13:54:10,827 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,828 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,828 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,829 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740
2023-05-31 13:54:10,832 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 13:54:10,834 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 13:54:10,840 INFO  [RS:0;jenkins-hbase17:36801] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 13:54:10,841 INFO  [RS:0;jenkins-hbase17:36801] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36801,1685541250424-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 13:54:10,842 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:54:10,842 INFO  [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=779890, jitterRate=-0.008319124579429626}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 13:54:10,843 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 13:54:10,843 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 13:54:10,843 INFO  [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 13:54:10,843 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 13:54:10,843 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 13:54:10,843 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 13:54:10,843 INFO  [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 13:54:10,844 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 13:54:10,846 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 13:54:10,846 INFO  [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-05-31 13:54:10,846 INFO  [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE;
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:54:10,848 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:54:10,850 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:54:10,860 INFO [RS:0;jenkins-hbase17:36801] regionserver.Replication(203): jenkins-hbase17.apache.org,36801,1685541250424 started 2023-05-31 13:54:10,860 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36801,1685541250424, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36801, sessionid=0x1008183e0240001 2023-05-31 13:54:10,860 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:54:10,860 DEBUG [RS:0;jenkins-hbase17:36801] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:10,860 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36801,1685541250424' 2023-05-31 13:54:10,860 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:54:10,861 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(53): Procedure 
flush-table-proc started 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36801,1685541250424' 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-31 13:54:10,862 DEBUG [RS:0;jenkins-hbase17:36801] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:54:10,863 DEBUG [RS:0;jenkins-hbase17:36801] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:54:10,863 INFO [RS:0;jenkins-hbase17:36801] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:54:10,863 INFO [RS:0;jenkins-hbase17:36801] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-31 13:54:10,965 INFO [RS:0;jenkins-hbase17:36801] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36801%2C1685541250424, suffix=, logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424, archiveDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs, maxLogs=32 2023-05-31 13:54:10,988 INFO [RS:0;jenkins-hbase17:36801] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541250967 2023-05-31 13:54:10,988 DEBUG [RS:0;jenkins-hbase17:36801] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]] 2023-05-31 13:54:11,000 DEBUG [jenkins-hbase17:33819] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:54:11,002 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,36801,1685541250424, state=OPENING 2023-05-31 13:54:11,003 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:54:11,004 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:11,004 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, 
server=jenkins-hbase17.apache.org,36801,1685541250424}] 2023-05-31 13:54:11,004 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:54:11,160 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:11,160 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:54:11,163 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:54:11,169 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:54:11,169 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:54:11,172 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36801%2C1685541250424.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424, archiveDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs, maxLogs=32 2023-05-31 13:54:11,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.meta.1685541251174.meta 2023-05-31 13:54:11,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]] 2023-05-31 13:54:11,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:54:11,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:54:11,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:54:11,193 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:54:11,193 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:54:11,193 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:54:11,193 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:54:11,194 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:54:11,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:54:11,198 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/info 2023-05-31 13:54:11,198 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/info 2023-05-31 13:54:11,198 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:54:11,199 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:54:11,199 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:54:11,200 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:54:11,200 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:54:11,201 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:54:11,202 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:54:11,202 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:54:11,203 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/table 2023-05-31 13:54:11,203 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740/table 2023-05-31 13:54:11,205 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:54:11,206 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:54:11,207 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740 2023-05-31 13:54:11,209 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/meta/1588230740 2023-05-31 13:54:11,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:54:11,214 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:54:11,215 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873746, jitterRate=0.11102615296840668}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:54:11,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:54:11,216 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541251160 2023-05-31 13:54:11,220 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:54:11,220 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:54:11,221 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,36801,1685541250424, state=OPEN 2023-05-31 13:54:11,222 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:54:11,222 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:54:11,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:54:11,225 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,36801,1685541250424 in 218 msec 2023-05-31 13:54:11,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:54:11,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 379 msec 2023-05-31 13:54:11,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 568 msec 2023-05-31 13:54:11,231 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541251231, completionTime=-1 2023-05-31 13:54:11,231 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:54:11,231 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:54:11,234 DEBUG [hconnection-0x40dc21c9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:54:11,236 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:54:11,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:54:11,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541311237 2023-05-31 13:54:11,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541371237 2023-05-31 13:54:11,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33819,1685541250347-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33819,1685541250347-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33819,1685541250347-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:33819, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 13:54:11,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:54:11,245 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 13:54:11,245 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 13:54:11,247 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:54:11,248 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:54:11,250 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,250 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5 empty. 2023-05-31 13:54:11,250 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,251 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 13:54:11,267 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 13:54:11,268 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1bb7196a0ec56257d44ee6fb4cf0d1e5, NAME => 'hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1bb7196a0ec56257d44ee6fb4cf0d1e5, disabling compactions & flushes 2023-05-31 13:54:11,291 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. after waiting 0 ms 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,291 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,291 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1bb7196a0ec56257d44ee6fb4cf0d1e5: 2023-05-31 13:54:11,295 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:54:11,297 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541251296"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541251296"}]},"ts":"1685541251296"} 2023-05-31 13:54:11,299 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:54:11,301 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:54:11,301 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541251301"}]},"ts":"1685541251301"} 2023-05-31 13:54:11,303 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 13:54:11,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1bb7196a0ec56257d44ee6fb4cf0d1e5, ASSIGN}] 2023-05-31 13:54:11,309 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1bb7196a0ec56257d44ee6fb4cf0d1e5, ASSIGN 2023-05-31 13:54:11,311 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1bb7196a0ec56257d44ee6fb4cf0d1e5, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36801,1685541250424; forceNewPlan=false, retain=false 2023-05-31 13:54:11,462 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1bb7196a0ec56257d44ee6fb4cf0d1e5, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:11,462 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541251462"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541251462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541251462"}]},"ts":"1685541251462"} 2023-05-31 13:54:11,466 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1bb7196a0ec56257d44ee6fb4cf0d1e5, server=jenkins-hbase17.apache.org,36801,1685541250424}] 2023-05-31 13:54:11,624 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1bb7196a0ec56257d44ee6fb4cf0d1e5, NAME => 'hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:54:11,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:54:11,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,626 INFO 
[StoreOpener-1bb7196a0ec56257d44ee6fb4cf0d1e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,627 DEBUG [StoreOpener-1bb7196a0ec56257d44ee6fb4cf0d1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5/info 2023-05-31 13:54:11,627 DEBUG [StoreOpener-1bb7196a0ec56257d44ee6fb4cf0d1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5/info 2023-05-31 13:54:11,628 INFO [StoreOpener-1bb7196a0ec56257d44ee6fb4cf0d1e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1bb7196a0ec56257d44ee6fb4cf0d1e5 columnFamilyName info 2023-05-31 13:54:11,629 INFO [StoreOpener-1bb7196a0ec56257d44ee6fb4cf0d1e5-1] regionserver.HStore(310): Store=1bb7196a0ec56257d44ee6fb4cf0d1e5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 13:54:11,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:11,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/hbase/namespace/1bb7196a0ec56257d44ee6fb4cf0d1e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:54:11,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1bb7196a0ec56257d44ee6fb4cf0d1e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807116, jitterRate=0.026301100850105286}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:54:11,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1bb7196a0ec56257d44ee6fb4cf0d1e5: 2023-05-31 13:54:11,639 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5., pid=6, masterSystemTime=1685541251619 2023-05-31 13:54:11,641 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,641 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:11,642 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1bb7196a0ec56257d44ee6fb4cf0d1e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:11,642 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541251642"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541251642"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541251642"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541251642"}]},"ts":"1685541251642"} 2023-05-31 13:54:11,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 13:54:11,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1bb7196a0ec56257d44ee6fb4cf0d1e5, server=jenkins-hbase17.apache.org,36801,1685541250424 in 178 msec 2023-05-31 13:54:11,649 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 13:54:11,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1bb7196a0ec56257d44ee6fb4cf0d1e5, ASSIGN in 340 msec 2023-05-31 13:54:11,651 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:54:11,651 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541251651"}]},"ts":"1685541251651"} 2023-05-31 13:54:11,653 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 13:54:11,656 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:54:11,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 413 msec 2023-05-31 13:54:11,747 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 13:54:11,748 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:54:11,749 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:11,757 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 13:54:11,766 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, 
quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:54:11,770 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-31 13:54:11,779 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 13:54:11,787 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:54:11,791 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-31 13:54:11,803 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 13:54:11,805 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.335sec 2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33819,1685541250347-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 13:54:11,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33819,1685541250347-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 13:54:11,809 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 13:54:11,866 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ReadOnlyZKClient(139): Connect 0x55a41fc1 to 127.0.0.1:57632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:54:11,876 DEBUG [Listener at localhost.localdomain/37517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ab9babd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:54:11,880 DEBUG [hconnection-0x4f2b07cb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:54:11,883 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55082, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:54:11,886 INFO [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:11,887 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:11,891 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 13:54:11,891 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:11,893 INFO [Listener at localhost.localdomain/37517] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RpcExecutor(189): 
Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:54:11,905 INFO [Listener at localhost.localdomain/37517] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:54:11,907 INFO [Listener at localhost.localdomain/37517] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36457 2023-05-31 13:54:11,907 INFO [Listener at localhost.localdomain/37517] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:54:11,908 DEBUG [Listener at localhost.localdomain/37517] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:54:11,909 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:11,910 INFO [Listener at localhost.localdomain/37517] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:11,910 INFO [Listener at localhost.localdomain/37517] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36457 connecting to ZooKeeper ensemble=127.0.0.1:57632 2023-05-31 13:54:11,913 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:364570x0, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:54:11,914 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36457-0x1008183e0240005 connected 2023-05-31 13:54:11,914 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(162): regionserver:36457-0x1008183e0240005, 
quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:54:11,915 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(162): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-31 13:54:11,916 DEBUG [Listener at localhost.localdomain/37517] zookeeper.ZKUtil(164): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:54:11,916 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36457 2023-05-31 13:54:11,916 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36457 2023-05-31 13:54:11,916 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36457 2023-05-31 13:54:11,917 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36457 2023-05-31 13:54:11,917 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36457 2023-05-31 13:54:11,921 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(951): ClusterId : a844cb0b-cc88-4160-85db-ebc76b96fcc9 2023-05-31 13:54:11,922 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:54:11,925 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:54:11,925 DEBUG [RS:1;jenkins-hbase17:36457] 
procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:54:11,926 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:54:11,927 DEBUG [RS:1;jenkins-hbase17:36457] zookeeper.ReadOnlyZKClient(139): Connect 0x77fd7911 to 127.0.0.1:57632 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:54:11,931 DEBUG [RS:1;jenkins-hbase17:36457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@793a899e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:54:11,931 DEBUG [RS:1;jenkins-hbase17:36457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d051261, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:54:11,937 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:36457 2023-05-31 13:54:11,938 INFO [RS:1;jenkins-hbase17:36457] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:54:11,939 INFO [RS:1;jenkins-hbase17:36457] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:54:11,939 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 13:54:11,939 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,33819,1685541250347 with isa=jenkins-hbase17.apache.org/136.243.18.41:36457, startcode=1685541251905 2023-05-31 13:54:11,940 DEBUG [RS:1;jenkins-hbase17:36457] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:54:11,943 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43385, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:54:11,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,944 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9 2023-05-31 13:54:11,944 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34425 2023-05-31 13:54:11,944 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:54:11,945 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:11,945 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:11,945 DEBUG [RS:1;jenkins-hbase17:36457] zookeeper.ZKUtil(162): 
regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,946 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36457,1685541251905] 2023-05-31 13:54:11,946 WARN [RS:1;jenkins-hbase17:36457] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:54:11,946 INFO [RS:1;jenkins-hbase17:36457] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:54:11,946 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:11,946 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,946 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,955 DEBUG [RS:1;jenkins-hbase17:36457] zookeeper.ZKUtil(162): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:11,956 DEBUG [RS:1;jenkins-hbase17:36457] zookeeper.ZKUtil(162): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,956 DEBUG [RS:1;jenkins-hbase17:36457] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:54:11,957 INFO [RS:1;jenkins-hbase17:36457] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:54:11,959 INFO [RS:1;jenkins-hbase17:36457] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:54:11,960 INFO [RS:1;jenkins-hbase17:36457] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:54:11,960 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,960 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:54:11,962 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,962 DEBUG [RS:1;jenkins-hbase17:36457] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, 
corePoolSize=1, maxPoolSize=1 2023-05-31 13:54:11,963 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,963 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,963 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,973 INFO [RS:1;jenkins-hbase17:36457] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:54:11,973 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36457,1685541251905-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:54:11,983 INFO [RS:1;jenkins-hbase17:36457] regionserver.Replication(203): jenkins-hbase17.apache.org,36457,1685541251905 started 2023-05-31 13:54:11,983 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36457,1685541251905, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36457, sessionid=0x1008183e0240005 2023-05-31 13:54:11,983 INFO [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase17:36457,5,FailOnTimeoutGroup] 2023-05-31 13:54:11,983 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:54:11,983 INFO [Listener at localhost.localdomain/37517] wal.TestLogRolling(323): Replication=2 2023-05-31 13:54:11,983 DEBUG [RS:1;jenkins-hbase17:36457] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:11,984 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(357): 
Starting procedure member 'jenkins-hbase17.apache.org,36457,1685541251905'
2023-05-31 13:54:11,984 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 13:54:11,985 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 13:54:11,986 DEBUG [Listener at localhost.localdomain/37517] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 13:54:11,986 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-05-31 13:54:11,986 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-05-31 13:54:11,986 DEBUG [RS:1;jenkins-hbase17:36457] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36457,1685541251905
2023-05-31 13:54:11,987 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36457,1685541251905'
2023-05-31 13:54:11,987 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-05-31 13:54:11,988 DEBUG [RS:1;jenkins-hbase17:36457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-05-31 13:54:11,988 DEBUG [RS:1;jenkins-hbase17:36457] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-05-31 13:54:11,988 INFO [RS:1;jenkins-hbase17:36457] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-05-31 13:54:11,989 INFO [RS:1;jenkins-hbase17:36457] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-05-31 13:54:11,989 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54374, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 13:54:11,991 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 13:54:11,991 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 13:54:11,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 13:54:11,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath
2023-05-31 13:54:11,995 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 13:54:11,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9
2023-05-31 13:54:11,996 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 13:54:11,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 13:54:11,998 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:11,998 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7 empty.
2023-05-31 13:54:11,999 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:11,999 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions
2023-05-31 13:54:12,013 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001
2023-05-31 13:54:12,014 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 15c8475d34a09d36f3534cee8c1acda7, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/.tmp
2023-05-31 13:54:12,027 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:54:12,027 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 15c8475d34a09d36f3534cee8c1acda7, disabling compactions & flushes
2023-05-31 13:54:12,027 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,028 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,028 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. after waiting 0 ms
2023-05-31 13:54:12,028 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,028 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,028 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 15c8475d34a09d36f3534cee8c1acda7:
2023-05-31 13:54:12,031 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 13:54:12,034 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685541252033"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541252033"}]},"ts":"1685541252033"}
2023-05-31 13:54:12,036 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 13:54:12,037 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 13:54:12,038 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541252037"}]},"ts":"1685541252037"}
2023-05-31 13:54:12,039 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta
2023-05-31 13:54:12,045 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0}
2023-05-31 13:54:12,047 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-05-31 13:54:12,047 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-05-31 13:54:12,047 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-05-31 13:54:12,048 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=15c8475d34a09d36f3534cee8c1acda7, ASSIGN}]
2023-05-31 13:54:12,050 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=15c8475d34a09d36f3534cee8c1acda7, ASSIGN
2023-05-31 13:54:12,051 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=15c8475d34a09d36f3534cee8c1acda7, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36457,1685541251905; forceNewPlan=false, retain=false
2023-05-31 13:54:12,093 INFO [RS:1;jenkins-hbase17:36457] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36457%2C1685541251905, suffix=, logDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905, archiveDir=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs, maxLogs=32
2023-05-31 13:54:12,112 INFO [RS:1;jenkins-hbase17:36457] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096
2023-05-31 13:54:12,112 DEBUG [RS:1;jenkins-hbase17:36457] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]]
2023-05-31 13:54:12,206 INFO [jenkins-hbase17:33819] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-05-31 13:54:12,208 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=15c8475d34a09d36f3534cee8c1acda7, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36457,1685541251905
2023-05-31 13:54:12,209 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685541252208"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541252208"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541252208"}]},"ts":"1685541252208"}
2023-05-31 13:54:12,214 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 15c8475d34a09d36f3534cee8c1acda7, server=jenkins-hbase17.apache.org,36457,1685541251905}]
2023-05-31 13:54:12,371 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36457,1685541251905
2023-05-31 13:54:12,371 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-05-31 13:54:12,378 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56266, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-05-31 13:54:12,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15c8475d34a09d36f3534cee8c1acda7, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:54:12,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:54:12,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,389 INFO [StoreOpener-15c8475d34a09d36f3534cee8c1acda7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,391 DEBUG [StoreOpener-15c8475d34a09d36f3534cee8c1acda7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info
2023-05-31 13:54:12,391 DEBUG [StoreOpener-15c8475d34a09d36f3534cee8c1acda7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info
2023-05-31 13:54:12,392 INFO [StoreOpener-15c8475d34a09d36f3534cee8c1acda7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15c8475d34a09d36f3534cee8c1acda7 columnFamilyName info
2023-05-31 13:54:12,393 INFO [StoreOpener-15c8475d34a09d36f3534cee8c1acda7-1] regionserver.HStore(310): Store=15c8475d34a09d36f3534cee8c1acda7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:54:12,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 15c8475d34a09d36f3534cee8c1acda7
2023-05-31 13:54:12,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:54:12,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 15c8475d34a09d36f3534cee8c1acda7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=761651, jitterRate=-0.03151172399520874}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 13:54:12,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 15c8475d34a09d36f3534cee8c1acda7:
2023-05-31 13:54:12,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7., pid=11, masterSystemTime=1685541252371
2023-05-31 13:54:12,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,412 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:12,414 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=15c8475d34a09d36f3534cee8c1acda7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36457,1685541251905
2023-05-31 13:54:12,414 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685541252414"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541252414"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541252414"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541252414"}]},"ts":"1685541252414"}
2023-05-31 13:54:12,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-05-31 13:54:12,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 15c8475d34a09d36f3534cee8c1acda7, server=jenkins-hbase17.apache.org,36457,1685541251905 in 203 msec
2023-05-31 13:54:12,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-05-31 13:54:12,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=15c8475d34a09d36f3534cee8c1acda7, ASSIGN in 373 msec
2023-05-31 13:54:12,425 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 13:54:12,425 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541252425"}]},"ts":"1685541252425"}
2023-05-31 13:54:12,426 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta
2023-05-31 13:54:12,429 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 13:54:12,431 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 438 msec
2023-05-31 13:54:14,578 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-05-31 13:54:16,793 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-05-31 13:54:16,794 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-05-31 13:54:17,957 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath'
2023-05-31 13:54:21,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 13:54:21,998 INFO [Listener at localhost.localdomain/37517] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed
2023-05-31 13:54:22,001 DEBUG [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath
2023-05-31 13:54:22,001 DEBUG [Listener at localhost.localdomain/37517] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.
2023-05-31 13:54:22,013 WARN [Listener at localhost.localdomain/37517] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:54:22,015 WARN [Listener at localhost.localdomain/37517] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:54:22,017 INFO [Listener at localhost.localdomain/37517] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:22,022 INFO [Listener at localhost.localdomain/37517] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_38337_datanode____9zmsl5/webapp
2023-05-31 13:54:22,098 INFO [Listener at localhost.localdomain/37517] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38337
2023-05-31 13:54:22,108 WARN [Listener at localhost.localdomain/44559] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:22,130 WARN [Listener at localhost.localdomain/44559] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:54:22,134 WARN [Listener at localhost.localdomain/44559] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:54:22,136 INFO [Listener at localhost.localdomain/44559] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:22,143 INFO [Listener at localhost.localdomain/44559] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_34009_datanode____.fnkzq9/webapp
2023-05-31 13:54:22,191 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2e0d291b1b25c875: Processing first storage report for DS-d97b2366-cf0f-438d-8abf-56bda5743477 from datanode 08789123-0680-482c-aebb-b97664e61532
2023-05-31 13:54:22,191 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2e0d291b1b25c875: from storage DS-d97b2366-cf0f-438d-8abf-56bda5743477 node DatanodeRegistration(127.0.0.1:41171, datanodeUuid=08789123-0680-482c-aebb-b97664e61532, infoPort=37333, infoSecurePort=0, ipcPort=44559, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,191 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2e0d291b1b25c875: Processing first storage report for DS-308ce5a4-07a9-4b90-9cdc-297b4a6b3efb from datanode 08789123-0680-482c-aebb-b97664e61532
2023-05-31 13:54:22,191 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2e0d291b1b25c875: from storage DS-308ce5a4-07a9-4b90-9cdc-297b4a6b3efb node DatanodeRegistration(127.0.0.1:41171, datanodeUuid=08789123-0680-482c-aebb-b97664e61532, infoPort=37333, infoSecurePort=0, ipcPort=44559, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,230 INFO [Listener at localhost.localdomain/44559] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34009
2023-05-31 13:54:22,308 WARN [Listener at localhost.localdomain/45661] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:22,347 WARN [Listener at localhost.localdomain/45661] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:54:22,352 WARN [Listener at localhost.localdomain/45661] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:54:22,354 INFO [Listener at localhost.localdomain/45661] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:22,359 INFO [Listener at localhost.localdomain/45661] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_41809_datanode____if528b/webapp
2023-05-31 13:54:22,411 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb13a106dad02d379: Processing first storage report for DS-a2686f79-d96e-4b98-8650-99e4be789691 from datanode 0621dc26-cf8f-4af7-b137-cf0a2cc34e05
2023-05-31 13:54:22,411 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb13a106dad02d379: from storage DS-a2686f79-d96e-4b98-8650-99e4be789691 node DatanodeRegistration(127.0.0.1:37423, datanodeUuid=0621dc26-cf8f-4af7-b137-cf0a2cc34e05, infoPort=41439, infoSecurePort=0, ipcPort=45661, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,411 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb13a106dad02d379: Processing first storage report for DS-57126da7-156c-466d-91ee-52d804f2076e from datanode 0621dc26-cf8f-4af7-b137-cf0a2cc34e05
2023-05-31 13:54:22,411 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb13a106dad02d379: from storage DS-57126da7-156c-466d-91ee-52d804f2076e node DatanodeRegistration(127.0.0.1:37423, datanodeUuid=0621dc26-cf8f-4af7-b137-cf0a2cc34e05, infoPort=41439, infoSecurePort=0, ipcPort=45661, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,455 INFO [Listener at localhost.localdomain/45661] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41809
2023-05-31 13:54:22,467 WARN [Listener at localhost.localdomain/40601] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:22,545 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84edd33055962a88: Processing first storage report for DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb from datanode 7e3baea3-6eb8-4435-b6a4-b108a09a0899
2023-05-31 13:54:22,545 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84edd33055962a88: from storage DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb node DatanodeRegistration(127.0.0.1:35261, datanodeUuid=7e3baea3-6eb8-4435-b6a4-b108a09a0899, infoPort=46443, infoSecurePort=0, ipcPort=40601, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,545 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84edd33055962a88: Processing first storage report for DS-ae678f05-5a02-4d83-8a17-d98e8e58d936 from datanode 7e3baea3-6eb8-4435-b6a4-b108a09a0899
2023-05-31 13:54:22,545 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84edd33055962a88: from storage DS-ae678f05-5a02-4d83-8a17-d98e8e58d936 node DatanodeRegistration(127.0.0.1:35261, datanodeUuid=7e3baea3-6eb8-4435-b6a4-b108a09a0899, infoPort=46443, infoSecurePort=0, ipcPort=40601, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:22,583 WARN [Listener at localhost.localdomain/40601] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:54:22,586 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 13:54:22,586 WARN [DataStreamer for file /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541250567 block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK], DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]) is bad.
2023-05-31 13:54:22,590 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009
java.io.IOException: Bad response ERROR for BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 13:54:22,591 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014
java.io.IOException: Bad response ERROR for BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 13:54:22,592 WARN [PacketResponder: BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37969]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:54:22,592 WARN [DataStreamer for file /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.meta.1685541251174.meta block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]) is bad.
2023-05-31 13:54:22,593 WARN [DataStreamer for file /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096 block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]) is bad.
2023-05-31 13:54:22,593 WARN [PacketResponder: BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37969]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:54:22,597 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008
java.io.IOException: Bad response ERROR for BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 13:54:22,597 WARN [DataStreamer for file /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541250967 block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK], DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]) is bad.
2023-05-31 13:54:22,597 WARN [PacketResponder: BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37969]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,600 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47428 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47428 dst: /127.0.0.1:45811 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,600 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:47386 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47386 dst: /127.0.0.1:45811 java.io.IOException: Premature EOF from inputStream at 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,609 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:47382 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47382 dst: /127.0.0.1:45811 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,614 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:22,618 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_468775582_17 at /127.0.0.1:47366 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47366 dst: /127.0.0.1:45811 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45811 remote=/127.0.0.1:47366]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,621 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:54334 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:37969:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54334 dst: /127.0.0.1:37969 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,622 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:54348 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:37969:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54348 dst: /127.0.0.1:37969 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,622 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:22,622 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_468775582_17 at /127.0.0.1:54308 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:37969:DataXceiver error processing 
WRITE_BLOCK operation src: /127.0.0.1:54308 dst: /127.0.0.1:37969 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,622 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:54394 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:37969:DataXceiver error processing 
WRITE_BLOCK operation src: /127.0.0.1:54394 dst: /127.0.0.1:37969 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,627 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid 
b3cbc322-f99d-488f-af11-788f393b0fa3) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:22,630 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data3/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:22,634 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data4/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:22,637 WARN [Listener at localhost.localdomain/40601] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:54:22,637 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:22,638 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:22,639 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:22,639 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:22,654 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:22,757 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_468775582_17 at /127.0.0.1:38552 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38552 dst: /127.0.0.1:45811 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,758 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:38568 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38568 dst: /127.0.0.1:45811 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,758 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:38554 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38554 dst: /127.0.0.1:45811 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,758 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1864621655_17 at /127.0.0.1:38556 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45811:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38556 dst: /127.0.0.1:45811 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:22,759 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:22,759 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid b6ea9468-0ffe-4e08-a4bc-8d189d9940f2) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:22,761 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data1/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:22,762 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data2/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:22,768 DEBUG [Listener at localhost.localdomain/40601] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:54:22,771 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:54:22,772 WARN [RS:1;jenkins-hbase17:36457.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:22,773 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36457%2C1685541251905:(num 1685541252096) roll requested 2023-05-31 13:54:22,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36457] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:22,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36457] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:34868 deadline: 1685541272771, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-31 13:54:22,781 WARN [Thread-627] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741839_1019 2023-05-31 13:54:22,784 WARN [Thread-627] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK] 2023-05-31 13:54:22,796 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-31 13:54:22,796 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 2023-05-31 13:54:22,797 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK]] 2023-05-31 13:54:22,797 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096 is not closed yet, will try archiving it next time 2023-05-31 13:54:22,797 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:22,797 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:22,800 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096 to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541252096 2023-05-31 13:54:34,850 INFO [Listener at localhost.localdomain/40601] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 2023-05-31 13:54:34,851 WARN [Listener at localhost.localdomain/40601] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:54:34,853 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:34,853 WARN [DataStreamer for file 
/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 block BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK], DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK]) is bad. 2023-05-31 13:54:34,859 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:34,859 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:36884 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:41171:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36884 dst: /127.0.0.1:41171 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41171 remote=/127.0.0.1:36884]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:34,860 WARN [PacketResponder: BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41171]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:34,862 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:57674 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:37423:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57674 dst: /127.0.0.1:37423 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:34,965 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:34,965 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid 0621dc26-cf8f-4af7-b137-cf0a2cc34e05) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:34,965 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data7/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:34,965 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data8/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:34,970 WARN [sync.3] wal.FSHLog(747): HDFS pipeline 
error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK]] 2023-05-31 13:54:34,970 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK]] 2023-05-31 13:54:34,970 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36457%2C1685541251905:(num 1685541262773) roll requested 2023-05-31 13:54:34,975 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:54458 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data6/current]'}, localName='127.0.0.1:41171', datanodeUuid='08789123-0680-482c-aebb-b97664e61532', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741841_1022 to mirror 127.0.0.1:37969: java.net.ConnectException: Connection refused 2023-05-31 13:54:34,975 WARN [Thread-637] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741841_1022 2023-05-31 13:54:34,975 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:54458 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:41171:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54458 dst: 
/127.0.0.1:41171 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:34,976 WARN [Thread-637] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK] 2023-05-31 13:54:34,984 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 2023-05-31 13:54:34,985 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK], DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:34,985 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 is not closed yet, will try archiving it next time 2023-05-31 13:54:37,204 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1e510c2c] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:41171, datanodeUuid=08789123-0680-482c-aebb-b97664e61532, infoPort=37333, infoSecurePort=0, ipcPort=44559, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741840_1021 to 127.0.0.1:37969 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:38,974 WARN [Listener at localhost.localdomain/40601] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:54:38,976 WARN [ResponseProcessor for block BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:54:38,976 WARN 
[DataStreamer for file /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 block BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023] hdfs.DataStreamer(1548): Error Recovery for BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK], DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK]) is bad. 2023-05-31 13:54:38,980 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:58814 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:35261:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58814 dst: /127.0.0.1:35261 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35261 remote=/127.0.0.1:58814]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:38,981 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:38,981 WARN [PacketResponder: BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35261]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:38,983 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:54474 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:41171:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54474 dst: /127.0.0.1:41171 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:39,087 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:39,087 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid 
08789123-0680-482c-aebb-b97664e61532) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:39,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data5/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:39,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data6/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:39,095 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,095 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,095 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36457%2C1685541251905:(num 1685541274970) roll requested 2023-05-31 13:54:39,100 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47572 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741843_1025]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data10/current]'}, localName='127.0.0.1:35261', datanodeUuid='7e3baea3-6eb8-4435-b6a4-b108a09a0899', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741843_1025 to mirror 127.0.0.1:45811: java.net.ConnectException: Connection refused 2023-05-31 13:54:39,100 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741843_1025 2023-05-31 13:54:39,100 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47572 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741843_1025]] datanode.DataXceiver(323): 127.0.0.1:35261:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47572 dst: /127.0.0.1:35261 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:39,100 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK] 2023-05-31 13:54:39,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36457] regionserver.HRegion(9158): Flush requested on 15c8475d34a09d36f3534cee8c1acda7 2023-05-31 13:54:39,101 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 15c8475d34a09d36f3534cee8c1acda7 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:54:39,103 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741844_1026 2023-05-31 13:54:39,104 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:39,107 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741845_1027 2023-05-31 13:54:39,108 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK] 2023-05-31 13:54:39,109 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741846_1028 
2023-05-31 13:54:39,109 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741847_1029 2023-05-31 13:54:39,110 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK] 2023-05-31 13:54:39,110 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK] 2023-05-31 13:54:39,111 WARN [IPC Server handler 3 on default port 34425] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 13:54:39,111 WARN [IPC Server handler 3 on default port 34425] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 13:54:39,111 WARN [IPC Server handler 3 on default port 34425] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 13:54:39,111 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning 
BP-1953963714-136.243.18.41-1685541249766:blk_1073741848_1030 2023-05-31 13:54:39,112 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK] 2023-05-31 13:54:39,114 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741850_1032 2023-05-31 13:54:39,114 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK] 2023-05-31 13:54:39,120 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47582 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data10/current]'}, localName='127.0.0.1:35261', datanodeUuid='7e3baea3-6eb8-4435-b6a4-b108a09a0899', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741851_1033 to mirror 127.0.0.1:41171: java.net.ConnectException: Connection refused 2023-05-31 13:54:39,120 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741851_1033 2023-05-31 13:54:39,121 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47582 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:35261:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47582 dst: /127.0.0.1:35261 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:39,122 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:39,122 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 with entries=11, filesize=11.81 KB; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 2023-05-31 13:54:39,122 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,122 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 is not closed yet, will try archiving it next time 2023-05-31 
13:54:39,123 WARN [IPC Server handler 4 on default port 34425] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 13:54:39,123 WARN [IPC Server handler 4 on default port 34425] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 13:54:39,124 WARN [IPC Server handler 4 on default port 34425] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 13:54:39,126 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,126 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,126 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36457%2C1685541251905:(num 1685541279095) roll requested 2023-05-31 13:54:39,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47588 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741853_1035]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data10/current]'}, localName='127.0.0.1:35261', datanodeUuid='7e3baea3-6eb8-4435-b6a4-b108a09a0899', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741853_1035 to mirror 127.0.0.1:37969: java.net.ConnectException: Connection refused 2023-05-31 13:54:39,133 WARN [Thread-657] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741853_1035 2023-05-31 13:54:39,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47588 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741853_1035]] datanode.DataXceiver(323): 127.0.0.1:35261:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47588 dst: /127.0.0.1:35261 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:39,134 WARN [Thread-657] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37969,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK] 2023-05-31 13:54:39,135 WARN [Thread-657] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741854_1036 2023-05-31 13:54:39,136 WARN [Thread-657] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK] 2023-05-31 13:54:39,137 WARN [Thread-657] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741855_1037 2023-05-31 13:54:39,137 WARN [Thread-657] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK] 2023-05-31 13:54:39,140 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47600 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741856_1038]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data9/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data10/current]'}, localName='127.0.0.1:35261', datanodeUuid='7e3baea3-6eb8-4435-b6a4-b108a09a0899', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741856_1038 to mirror 127.0.0.1:41171: java.net.ConnectException: Connection refused 2023-05-31 13:54:39,140 WARN [Thread-657] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741856_1038 2023-05-31 13:54:39,140 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:47600 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741856_1038]] datanode.DataXceiver(323): 127.0.0.1:35261:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47600 dst: /127.0.0.1:35261 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:39,141 WARN [Thread-657] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:39,142 WARN [IPC Server handler 0 on default port 34425] 
blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 13:54:39,142 WARN [IPC Server handler 0 on default port 34425] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 13:54:39,142 WARN [IPC Server handler 0 on default port 34425] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 13:54:39,147 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279126 2023-05-31 13:54:39,147 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:39,147 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 is not closed yet, will try archiving it next time 2023-05-31 13:54:39,147 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 is not closed yet, will try archiving it next time 2023-05-31 13:54:39,332 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-31 13:54:39,530 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 is not closed yet, will try archiving it next time 2023-05-31 13:54:39,534 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/6cfd838b0acb4cb093c928373ec59945 2023-05-31 13:54:39,538 WARN [Listener at localhost.localdomain/40601] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:54:39,544 WARN [Listener at localhost.localdomain/40601] http.HttpRequestLog(97): Jetty 
request log can only be enabled using Log4j 2023-05-31 13:54:39,547 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:54:39,551 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/6cfd838b0acb4cb093c928373ec59945 as hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/6cfd838b0acb4cb093c928373ec59945 2023-05-31 13:54:39,553 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/java.io.tmpdir/Jetty_localhost_36997_datanode____.frb4n9/webapp 2023-05-31 13:54:39,559 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/6cfd838b0acb4cb093c928373ec59945, entries=5, sequenceid=12, filesize=10.0 K 2023-05-31 13:54:39,560 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=8.40 KB/8606 for 15c8475d34a09d36f3534cee8c1acda7 in 459ms, sequenceid=12, compaction requested=false 2023-05-31 13:54:39,561 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 15c8475d34a09d36f3534cee8c1acda7: 2023-05-31 13:54:39,628 INFO [Listener at localhost.localdomain/40601] log.Slf4jLog(67): 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36997 2023-05-31 13:54:39,635 WARN [Listener at localhost.localdomain/36633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:54:39,713 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e311738e2858ca7: Processing first storage report for DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4 from datanode b3cbc322-f99d-488f-af11-788f393b0fa3 2023-05-31 13:54:39,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e311738e2858ca7: from storage DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4 node DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:54:39,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e311738e2858ca7: Processing first storage report for DS-4a95328c-caec-402b-8308-82d21079cb6e from datanode b3cbc322-f99d-488f-af11-788f393b0fa3 2023-05-31 13:54:39,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e311738e2858ca7: from storage DS-4a95328c-caec-402b-8308-82d21079cb6e node DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:54:40,548 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5a05ab52] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35261, datanodeUuid=7e3baea3-6eb8-4435-b6a4-b108a09a0899, infoPort=46443, infoSecurePort=0, ipcPort=40601, 
storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741842_1024 to 127.0.0.1:37423 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:40,548 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3093a08f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35261, datanodeUuid=7e3baea3-6eb8-4435-b6a4-b108a09a0899, infoPort=46443, infoSecurePort=0, ipcPort=40601, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741852_1034 to 127.0.0.1:37423 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:40,686 WARN [master/jenkins-hbase17:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:40,687 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C33819%2C1685541250347:(num 1685541250567) roll requested 2023-05-31 13:54:40,692 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741858_1040 2023-05-31 13:54:40,692 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:40,693 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:40,694 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37423,DS-a2686f79-d96e-4b98-8650-99e4be789691,DISK] 2023-05-31 13:54:40,695 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741859_1041 2023-05-31 13:54:40,696 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK] 2023-05-31 13:54:40,698 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741860_1042 2023-05-31 13:54:40,698 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:40,704 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-31 13:54:40,705 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541250567 with entries=88, filesize=43.75 KB; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541280687 2023-05-31 
13:54:40,706 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK], DatanodeInfoWithStorage[127.0.0.1:40977,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK]] 2023-05-31 13:54:40,706 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541250567 is not closed yet, will try archiving it next time 2023-05-31 13:54:40,706 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:40,707 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347/jenkins-hbase17.apache.org%2C33819%2C1685541250347.1685541250567; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:52,713 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@77808153] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741837_1013 to 127.0.0.1:41171 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:55,713 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@72251f1f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741828_1004 to 127.0.0.1:41171 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:58,232 WARN [Thread-720] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741862_1044 2023-05-31 13:54:58,232 WARN [Thread-720] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:58,242 INFO [Listener at localhost.localdomain/36633] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279126 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541298227 2023-05-31 13:54:58,242 DEBUG [Listener at localhost.localdomain/36633] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40977,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK], DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:58,242 DEBUG [Listener at localhost.localdomain/36633] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279126 is not closed yet, will try archiving it next time 2023-05-31 13:54:58,242 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving 
hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541262773 2023-05-31 13:54:58,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36457] regionserver.HRegion(9158): Flush requested on 15c8475d34a09d36f3534cee8c1acda7 2023-05-31 13:54:58,248 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 15c8475d34a09d36f3534cee8c1acda7 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-31 13:54:58,250 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-31 13:54:58,259 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:42376 [Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data4/current]'}, localName='127.0.0.1:40977', datanodeUuid='b3cbc322-f99d-488f-af11-788f393b0fa3', xmitsInProgress=0}:Exception transfering block BP-1953963714-136.243.18.41-1685541249766:blk_1073741864_1046 to mirror 127.0.0.1:41171: java.net.ConnectException: Connection refused 2023-05-31 13:54:58,259 WARN [Thread-727] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741864_1046 2023-05-31 13:54:58,259 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-500984276_17 at /127.0.0.1:42376 
[Receiving block BP-1953963714-136.243.18.41-1685541249766:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:40977:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42376 dst: /127.0.0.1:40977 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:58,260 WARN [Thread-727] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:58,268 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:54:58,268 INFO [Listener at localhost.localdomain/36633] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 13:54:58,269 DEBUG [Listener at localhost.localdomain/36633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x55a41fc1 to 127.0.0.1:57632 2023-05-31 13:54:58,269 DEBUG [Listener at localhost.localdomain/36633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,269 DEBUG [Listener at localhost.localdomain/36633] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:54:58,269 DEBUG [Listener at localhost.localdomain/36633] util.JVMClusterUtil(257): Found active master hash=1814483288, stopped=false 
2023-05-31 13:54:58,269 INFO [Listener at localhost.localdomain/36633] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:58,270 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:54:58,271 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:58,271 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:54:58,271 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:54:58,271 INFO [Listener at localhost.localdomain/36633] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:54:58,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:58,272 DEBUG [Listener at localhost.localdomain/36633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4403c638 to 127.0.0.1:57632 2023-05-31 13:54:58,272 DEBUG [Listener at localhost.localdomain/36633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,272 INFO [Listener at localhost.localdomain/36633] regionserver.HRegionServer(2295): ***** STOPPING 
region server 'jenkins-hbase17.apache.org,36801,1685541250424' ***** 2023-05-31 13:54:58,273 INFO [Listener at localhost.localdomain/36633] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:54:58,273 INFO [Listener at localhost.localdomain/36633] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,36457,1685541251905' ***** 2023-05-31 13:54:58,273 INFO [Listener at localhost.localdomain/36633] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:54:58,273 INFO [RS:1;jenkins-hbase17:36457] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:54:58,272 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=24 (bloomFilter=true), to=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/8df01c3ec018471eb2bef4b2848dc6b8 2023-05-31 13:54:58,273 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:58,273 INFO [RS:0;jenkins-hbase17:36801] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:54:58,273 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:58,273 INFO [RS:0;jenkins-hbase17:36801] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-31 13:54:58,273 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:54:58,273 INFO [RS:0;jenkins-hbase17:36801] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 13:54:58,274 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(3303): Received CLOSE for 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:58,274 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:58,274 DEBUG [RS:0;jenkins-hbase17:36801] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x390cd212 to 127.0.0.1:57632 2023-05-31 13:54:58,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bb7196a0ec56257d44ee6fb4cf0d1e5, disabling compactions & flushes 2023-05-31 13:54:58,275 DEBUG [RS:0;jenkins-hbase17:36801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,275 INFO [RS:0;jenkins-hbase17:36801] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 13:54:58,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,275 INFO [RS:0;jenkins-hbase17:36801] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 13:54:58,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 
after waiting 0 ms 2023-05-31 13:54:58,275 INFO [RS:0;jenkins-hbase17:36801] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 13:54:58,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,275 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 13:54:58,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1bb7196a0ec56257d44ee6fb4cf0d1e5 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 13:54:58,276 WARN [RS:0;jenkins-hbase17:36801.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,276 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-31 13:54:58,277 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36801%2C1685541250424:(num 1685541250967) roll requested 2023-05-31 13:54:58,277 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1478): Online Regions={1bb7196a0ec56257d44ee6fb4cf0d1e5=hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5., 1588230740=hbase:meta,,1.1588230740} 2023-05-31 13:54:58,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close 
journal for 1bb7196a0ec56257d44ee6fb4cf0d1e5: 2023-05-31 13:54:58,277 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-05-31 13:54:58,277 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase17.apache.org,36801,1685541250424: Unrecoverable exception while closing hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,278 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:54:58,279 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-31 13:54:58,279 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:54:58,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:54:58,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:54:58,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:54:58,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:54:58,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 13:54:58,288 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741866_1048 2023-05-31 13:54:58,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-31 13:54:58,289 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:58,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-31 13:54:58,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-31 13:54:58,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-31 13:54:58,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1041235968, "init": 524288000, "max": 2051014656, "used": 332951384 }, "NonHeapMemoryUsage": { "committed": 134045696, "init": 2555904, "max": -1, "used": 131381200 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-31 13:54:58,293 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/8df01c3ec018471eb2bef4b2848dc6b8 as 
hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/8df01c3ec018471eb2bef4b2848dc6b8 2023-05-31 13:54:58,300 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-31 13:54:58,300 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541250967 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541298277 2023-05-31 13:54:58,300 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40977,DS-b7b2a20e-a0be-4061-a28b-7df9bc6fa9a4,DISK], DatanodeInfoWithStorage[127.0.0.1:35261,DS-7e2fc5cb-1b39-4b2b-aed8-b21b2bce2cdb,DISK]] 2023-05-31 13:54:58,301 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,301 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541250967 is not closed yet, will try archiving it next time 2023-05-31 13:54:58,301 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424/jenkins-hbase17.apache.org%2C36801%2C1685541250424.1685541250967; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,304 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33819] master.MasterRpcServices(609): jenkins-hbase17.apache.org,36801,1685541250424 reported a fatal error: ***** ABORTING region server jenkins-hbase17.apache.org,36801,1685541250424: Unrecoverable exception while closing hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,307 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/8df01c3ec018471eb2bef4b2848dc6b8, entries=8, sequenceid=24, filesize=13.2 K 2023-05-31 13:54:58,308 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9682, heapSize ~10.36 KB/10608, currentSize=9.46 KB/9684 for 15c8475d34a09d36f3534cee8c1acda7 in 60ms, sequenceid=24, compaction requested=false 2023-05-31 13:54:58,308 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 15c8475d34a09d36f3534cee8c1acda7: 2023-05-31 13:54:58,308 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 
2023-05-31 13:54:58,308 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:54:58,308 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/8df01c3ec018471eb2bef4b2848dc6b8 because midkey is the same as first or last row 2023-05-31 13:54:58,308 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:54:58,309 INFO [RS:1;jenkins-hbase17:36457] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 13:54:58,309 INFO [RS:1;jenkins-hbase17:36457] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 13:54:58,309 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(3303): Received CLOSE for 15c8475d34a09d36f3534cee8c1acda7 2023-05-31 13:54:58,310 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:58,310 DEBUG [RS:1;jenkins-hbase17:36457] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x77fd7911 to 127.0.0.1:57632 2023-05-31 13:54:58,310 DEBUG [RS:1;jenkins-hbase17:36457] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 15c8475d34a09d36f3534cee8c1acda7, disabling compactions & flushes 2023-05-31 13:54:58,310 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-31 13:54:58,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region 
TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. 2023-05-31 13:54:58,310 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1478): Online Regions={15c8475d34a09d36f3534cee8c1acda7=TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7.} 2023-05-31 13:54:58,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. 2023-05-31 13:54:58,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. after waiting 0 ms 2023-05-31 13:54:58,310 DEBUG [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1504): Waiting on 15c8475d34a09d36f3534cee8c1acda7 2023-05-31 13:54:58,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. 
2023-05-31 13:54:58,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 15c8475d34a09d36f3534cee8c1acda7 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-31 13:54:58,316 WARN [Thread-744] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741868_1050 2023-05-31 13:54:58,317 WARN [Thread-744] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:58,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=36 (bloomFilter=true), to=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/194779795e77487b99888abd6164a296 2023-05-31 13:54:58,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/.tmp/info/194779795e77487b99888abd6164a296 as hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/194779795e77487b99888abd6164a296 2023-05-31 13:54:58,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/info/194779795e77487b99888abd6164a296, entries=9, sequenceid=36, filesize=14.2 K 2023-05-31 13:54:58,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 15c8475d34a09d36f3534cee8c1acda7 in 36ms, sequenceid=36, compaction requested=true 2023-05-31 13:54:58,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/data/default/TestLogRolling-testLogRollOnDatanodeDeath/15c8475d34a09d36f3534cee8c1acda7/recovered.edits/39.seqid, newMaxSeqId=39, maxSeqId=1 2023-05-31 13:54:58,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. 2023-05-31 13:54:58,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 15c8475d34a09d36f3534cee8c1acda7: 2023-05-31 13:54:58,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685541251991.15c8475d34a09d36f3534cee8c1acda7. 2023-05-31 13:54:58,477 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(3303): Received CLOSE for 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:58,477 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bb7196a0ec56257d44ee6fb4cf0d1e5, disabling compactions & flushes 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:54:58,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 
2023-05-31 13:54:58,478 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1504): Waiting on 1588230740, 1bb7196a0ec56257d44ee6fb4cf0d1e5 2023-05-31 13:54:58,478 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. after waiting 0 ms 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1bb7196a0ec56257d44ee6fb4cf0d1e5: 2023-05-31 13:54:58,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685541251243.1bb7196a0ec56257d44ee6fb4cf0d1e5. 2023-05-31 13:54:58,510 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36457,1685541251905; all regions closed. 2023-05-31 13:54:58,511 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:58,648 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541274970 2023-05-31 13:54:58,649 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279095 2023-05-31 13:54:58,650 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving 
hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36457,1685541251905/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279126 to hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs/jenkins-hbase17.apache.org%2C36457%2C1685541251905.1685541279126 2023-05-31 13:54:58,653 DEBUG [RS:1;jenkins-hbase17:36457] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/oldWALs 2023-05-31 13:54:58,653 INFO [RS:1;jenkins-hbase17:36457] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C36457%2C1685541251905:(num 1685541298227) 2023-05-31 13:54:58,653 DEBUG [RS:1;jenkins-hbase17:36457] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,653 INFO [RS:1;jenkins-hbase17:36457] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:58,654 INFO [RS:1;jenkins-hbase17:36457] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 13:54:58,654 INFO [RS:1;jenkins-hbase17:36457] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 13:54:58,654 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 13:54:58,654 INFO [RS:1;jenkins-hbase17:36457] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 13:54:58,654 INFO [RS:1;jenkins-hbase17:36457] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-31 13:54:58,655 INFO [RS:1;jenkins-hbase17:36457] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36457 2023-05-31 13:54:58,658 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:58,659 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:58,659 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36457,1685541251905 2023-05-31 13:54:58,659 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:58,659 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:58,660 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36457,1685541251905] 2023-05-31 13:54:58,660 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36457,1685541251905; numProcessing=1 2023-05-31 13:54:58,661 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36457,1685541251905 already deleted, retry=false 2023-05-31 13:54:58,661 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36457,1685541251905 expired; onlineServers=1 2023-05-31 13:54:58,678 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-31 13:54:58,678 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36801,1685541250424; all regions closed. 2023-05-31 13:54:58,678 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:58,678 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,679 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/WALs/jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:58,684 ERROR [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... 
2023-05-31 13:54:58,684 DEBUG [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45811,DS-2446db86-2914-41fd-8f1a-dbbd7fd463cb,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:54:58,684 DEBUG [RS:0;jenkins-hbase17:36801] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,684 INFO [RS:0;jenkins-hbase17:36801] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:58,684 INFO [RS:0;jenkins-hbase17:36801] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 13:54:58,684 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 13:54:58,685 INFO [RS:0;jenkins-hbase17:36801] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36801 2023-05-31 13:54:58,686 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36801,1685541250424 2023-05-31 13:54:58,686 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:54:58,687 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36801,1685541250424] 2023-05-31 13:54:58,687 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36801,1685541250424; numProcessing=2 2023-05-31 13:54:58,688 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36801,1685541250424 already deleted, retry=false 2023-05-31 13:54:58,688 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36801,1685541250424 expired; onlineServers=0 2023-05-31 13:54:58,688 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,33819,1685541250347' ***** 2023-05-31 13:54:58,688 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 13:54:58,689 DEBUG [M:0;jenkins-hbase17:33819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68195804, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:54:58,689 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:58,689 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33819,1685541250347; all regions closed. 2023-05-31 13:54:58,689 DEBUG [M:0;jenkins-hbase17:33819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:54:58,689 DEBUG [M:0;jenkins-hbase17:33819] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 13:54:58,689 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 13:54:58,689 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541250704] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541250704,5,FailOnTimeoutGroup] 2023-05-31 13:54:58,689 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541250714] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541250714,5,FailOnTimeoutGroup] 2023-05-31 13:54:58,689 DEBUG [M:0;jenkins-hbase17:33819] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 13:54:58,691 INFO [M:0;jenkins-hbase17:33819] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 13:54:58,691 INFO [M:0;jenkins-hbase17:33819] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 13:54:58,691 INFO [M:0;jenkins-hbase17:33819] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-05-31 13:54:58,691 DEBUG [M:0;jenkins-hbase17:33819] master.HMaster(1512): Stopping service threads 2023-05-31 13:54:58,691 INFO [M:0;jenkins-hbase17:33819] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 13:54:58,692 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:58,692 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:58,692 ERROR [M:0;jenkins-hbase17:33819] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 13:54:58,692 INFO [M:0;jenkins-hbase17:33819] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 13:54:58,692 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 13:54:58,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:58,693 DEBUG [M:0;jenkins-hbase17:33819] zookeeper.ZKUtil(398): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 13:54:58,693 WARN [M:0;jenkins-hbase17:33819] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 13:54:58,693 INFO [M:0;jenkins-hbase17:33819] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 13:54:58,693 INFO [M:0;jenkins-hbase17:33819] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 13:54:58,694 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:54:58,694 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:58,694 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:58,694 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:54:58,694 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:54:58,694 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.12 KB heapSize=45.77 KB 2023-05-31 13:54:58,702 WARN [Thread-752] hdfs.DataStreamer(1658): Abandoning BP-1953963714-136.243.18.41-1685541249766:blk_1073741870_1052 2023-05-31 13:54:58,703 WARN [Thread-752] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41171,DS-d97b2366-cf0f-438d-8abf-56bda5743477,DISK] 2023-05-31 13:54:58,709 INFO [M:0;jenkins-hbase17:33819] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.12 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0240735311f84313a4e9956b00832322 2023-05-31 13:54:58,713 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@140b787] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741825_1001 to 127.0.0.1:41171 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:58,713 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@649ae4ad] datanode.DataNode$DataTransfer(2503): 
DatanodeRegistration(127.0.0.1:40977, datanodeUuid=b3cbc322-f99d-488f-af11-788f393b0fa3, infoPort=36397, infoSecurePort=0, ipcPort=36633, storageInfo=lv=-57;cid=testClusterID;nsid=777904125;c=1685541249766):Failed to transfer BP-1953963714-136.243.18.41-1685541249766:blk_1073741836_1012 to 127.0.0.1:41171 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:54:58,715 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0240735311f84313a4e9956b00832322 as hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0240735311f84313a4e9956b00832322 2023-05-31 13:54:58,722 INFO [M:0;jenkins-hbase17:33819] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34425/user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0240735311f84313a4e9956b00832322, entries=11, sequenceid=92, filesize=7.0 K 2023-05-31 13:54:58,723 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegion(2948): Finished flush of dataSize ~38.12 KB/39035, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=92, compaction requested=false 2023-05-31 13:54:58,724 INFO [M:0;jenkins-hbase17:33819] 
regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:54:58,725 DEBUG [M:0;jenkins-hbase17:33819] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:54:58,725 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4c3e78c1-b91a-31f5-fdae-914f59bd11e9/MasterData/WALs/jenkins-hbase17.apache.org,33819,1685541250347 2023-05-31 13:54:58,728 INFO [M:0;jenkins-hbase17:33819] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 13:54:58,728 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 13:54:58,728 INFO [M:0;jenkins-hbase17:33819] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33819 2023-05-31 13:54:58,730 DEBUG [M:0;jenkins-hbase17:33819] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,33819,1685541250347 already deleted, retry=false 2023-05-31 13:54:58,773 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,773 INFO [RS:1;jenkins-hbase17:36457] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36457,1685541251905; zookeeper connection closed. 
2023-05-31 13:54:58,773 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36457-0x1008183e0240005, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,774 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7f7a605b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7f7a605b 2023-05-31 13:54:58,838 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:58,873 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,874 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): master:33819-0x1008183e0240000, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,873 INFO [M:0;jenkins-hbase17:33819] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33819,1685541250347; zookeeper connection closed. 2023-05-31 13:54:58,974 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,974 DEBUG [Listener at localhost.localdomain/37517-EventThread] zookeeper.ZKWatcher(600): regionserver:36801-0x1008183e0240001, quorum=127.0.0.1:57632, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:54:58,974 INFO [RS:0;jenkins-hbase17:36801] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36801,1685541250424; zookeeper connection closed. 
2023-05-31 13:54:58,975 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7df3956f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7df3956f 2023-05-31 13:54:58,975 INFO [Listener at localhost.localdomain/36633] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-31 13:54:58,975 WARN [Listener at localhost.localdomain/36633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:54:58,979 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:59,084 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:59,084 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid b3cbc322-f99d-488f-af11-788f393b0fa3) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:59,085 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data3/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:59,086 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data4/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-05-31 13:54:59,088 WARN [Listener at localhost.localdomain/36633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:54:59,091 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:54:59,194 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:54:59,195 WARN [BP-1953963714-136.243.18.41-1685541249766 heartbeating to localhost.localdomain/127.0.0.1:34425] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1953963714-136.243.18.41-1685541249766 (Datanode Uuid 7e3baea3-6eb8-4435-b6a4-b108a09a0899) service to localhost.localdomain/127.0.0.1:34425 2023-05-31 13:54:59,195 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data9/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:59,196 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/cluster_a52f4491-ae48-6384-c377-bbc1d696a261/dfs/data/data10/current/BP-1953963714-136.243.18.41-1685541249766] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:54:59,211 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 13:54:59,331 INFO [Listener at localhost.localdomain/36633] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK 
cluster with all ZK servers 2023-05-31 13:54:59,376 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 13:54:59,386 INFO [Listener at localhost.localdomain/36633] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:34425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost.localdomain/36633 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) 
org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:34425 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 
Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:34425 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:34425 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:34425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (550425889) connection to localhost.localdomain/127.0.0.1:34425 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ForkJoinPool-3-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)

Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:34425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)

- Thread LEAK? -, OpenFileDescriptor=460 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=180 (was 262), ProcessCount=171 (was 170) - ProcessCount LEAK? -, AvailableMemoryMB=7546 (was 8298)
2023-05-31 13:54:59,395 INFO [Listener at localhost.localdomain/36633] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=180, ProcessCount=171, AvailableMemoryMB=7546
2023-05-31 13:54:59,396 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 13:54:59,396 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/hadoop.log.dir so I do NOT create it in target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735
2023-05-31 13:54:59,396 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to:
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b341db00-cb2f-ba39-b2e6-ca948278547f/hadoop.tmp.dir so I do NOT create it in target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735
2023-05-31 13:54:59,396 INFO [Listener at localhost.localdomain/36633] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5, deleteOnExit=true
2023-05-31 13:54:59,396 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 13:54:59,397 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/test.cache.data in system properties and HBase conf
2023-05-31 13:54:59,397 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 13:54:59,397 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/hadoop.log.dir in system properties and HBase conf
2023-05-31 13:54:59,397 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 13:54:59,397 INFO
[Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 13:54:59,397 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 13:54:59,398 DEBUG [Listener at localhost.localdomain/36633] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 13:54:59,398 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:54:59,398 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:54:59,398 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 13:54:59,398 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.nodemanager.remote-app-log-dir in system
properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/nfs.dump.dir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir in system properties and HBase conf
2023-05-31 13:54:59,399 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 13:54:59,400 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 13:54:59,400 INFO [Listener at localhost.localdomain/36633] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 13:54:59,401 WARN [Listener at localhost.localdomain/36633] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 13:54:59,403 WARN [Listener at localhost.localdomain/36633] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 13:54:59,403 WARN [Listener at localhost.localdomain/36633] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 13:54:59,428 WARN [Listener at localhost.localdomain/36633] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:54:59,430 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:59,441 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_localdomain_40703_hdfs____wkq4wo/webapp
2023-05-31 13:54:59,514 INFO [Listener at localhost.localdomain/36633] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:40703
2023-05-31 13:54:59,516 WARN [Listener at localhost.localdomain/36633] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 13:54:59,517 WARN [Listener at localhost.localdomain/36633] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 13:54:59,517 WARN [Listener at localhost.localdomain/36633] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 13:54:59,545 WARN [Listener at localhost.localdomain/34707] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:59,556 WARN [Listener at localhost.localdomain/34707] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:54:59,559 WARN [Listener at localhost.localdomain/34707] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:54:59,560 INFO [Listener at localhost.localdomain/34707] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:59,566 INFO [Listener at localhost.localdomain/34707] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_35025_datanode____.lesonc/webapp
2023-05-31 13:54:59,637 INFO [Listener at localhost.localdomain/34707] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35025
2023-05-31 13:54:59,644 WARN [Listener at localhost.localdomain/36629] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:59,661 WARN [Listener at localhost.localdomain/36629] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:54:59,664 WARN [Listener at localhost.localdomain/36629] http.HttpRequestLog(97): Jetty request
log can only be enabled using Log4j
2023-05-31 13:54:59,665 INFO [Listener at localhost.localdomain/36629] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:54:59,669 INFO [Listener at localhost.localdomain/36629] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_38883_datanode____.tt1s2z/webapp
2023-05-31 13:54:59,711 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbdbc4962b0ba9e4e: Processing first storage report for DS-d15d605a-c411-44e8-bca7-c810ef84d428 from datanode eaca20e2-3460-4ade-ae1c-a8756561d942
2023-05-31 13:54:59,711 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbdbc4962b0ba9e4e: from storage DS-d15d605a-c411-44e8-bca7-c810ef84d428 node DatanodeRegistration(127.0.0.1:45107, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=45905, infoSecurePort=0, ipcPort=36629, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:59,711 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbdbc4962b0ba9e4e: Processing first storage report for DS-7ebe5b01-f189-43d7-ade4-51834b8b387b from datanode eaca20e2-3460-4ade-ae1c-a8756561d942
2023-05-31 13:54:59,711 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbdbc4962b0ba9e4e: from storage DS-7ebe5b01-f189-43d7-ade4-51834b8b387b node DatanodeRegistration(127.0.0.1:45107, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=45905, infoSecurePort=0, ipcPort=36629,
storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:59,748 INFO [Listener at localhost.localdomain/36629] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38883
2023-05-31 13:54:59,755 WARN [Listener at localhost.localdomain/35725] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:54:59,806 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e51f0c94ac78ca0: Processing first storage report for DS-3b23d7cb-6456-4c30-9119-9abae75de53e from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e
2023-05-31 13:54:59,806 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e51f0c94ac78ca0: from storage DS-3b23d7cb-6456-4c30-9119-9abae75de53e node DatanodeRegistration(127.0.0.1:33463, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=35047, infoSecurePort=0, ipcPort=35725, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:59,807 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e51f0c94ac78ca0: Processing first storage report for DS-768d311f-97cc-4849-9f8d-9ce13147dd1d from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e
2023-05-31 13:54:59,807 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e51f0c94ac78ca0: from storage DS-768d311f-97cc-4849-9f8d-9ce13147dd1d node DatanodeRegistration(127.0.0.1:33463, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=35047, infoSecurePort=0, ipcPort=35725, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:54:59,865 DEBUG [Listener at
localhost.localdomain/35725] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735 2023-05-31 13:54:59,868 INFO [Listener at localhost.localdomain/35725] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/zookeeper_0, clientPort=59404, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 13:54:59,869 INFO [Listener at localhost.localdomain/35725] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59404 2023-05-31 13:54:59,870 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,871 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,887 INFO [Listener at localhost.localdomain/35725] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b with version=8 2023-05-31 13:54:59,887 INFO [Listener at localhost.localdomain/35725] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:54:59,889 INFO [Listener at localhost.localdomain/35725] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 13:54:59,891 INFO [Listener at localhost.localdomain/35725] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38553 2023-05-31 13:54:59,892 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,892 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,893 INFO [Listener at localhost.localdomain/35725] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38553 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-05-31 13:54:59,900 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:385530x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:54:59,901 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38553-0x1008184a1ba0000 connected 2023-05-31 13:54:59,918 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:59,919 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:59,919 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:54:59,920 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38553 2023-05-31 13:54:59,921 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38553 2023-05-31 13:54:59,924 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38553 2023-05-31 13:54:59,927 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38553 2023-05-31 13:54:59,927 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38553 2023-05-31 13:54:59,927 INFO [Listener at localhost.localdomain/35725] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b, hbase.cluster.distributed=false 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:54:59,940 INFO [Listener at localhost.localdomain/35725] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:54:59,942 INFO [Listener at localhost.localdomain/35725] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43095 2023-05-31 13:54:59,942 INFO [Listener at localhost.localdomain/35725] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:54:59,943 DEBUG [Listener at localhost.localdomain/35725] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:54:59,944 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,945 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,946 INFO [Listener at localhost.localdomain/35725] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43095 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-05-31 13:54:59,949 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:430950x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:54:59,951 DEBUG 
[Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): regionserver:430950x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:54:59,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43095-0x1008184a1ba0001 connected 2023-05-31 13:54:59,952 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:54:59,952 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ZKUtil(164): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:54:59,953 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43095 2023-05-31 13:54:59,953 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43095 2023-05-31 13:54:59,953 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43095 2023-05-31 13:54:59,954 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43095 2023-05-31 13:54:59,954 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43095 2023-05-31 13:54:59,955 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:54:59,956 DEBUG [Listener at localhost.localdomain/35725-EventThread] 
zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:54:59,957 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:54:59,958 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:59,958 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:54:59,958 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:59,959 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:54:59,960 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,38553,1685541299888 from backup master directory 2023-05-31 13:54:59,960 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:54:59,961 DEBUG [Listener at 
localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:54:59,961 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:54:59,961 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:54:59,961 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:54:59,965 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:54:59,975 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/hbase.id with ID: 6ed5110c-fe6a-4851-a818-94d4b2a9bcaf 2023-05-31 13:54:59,986 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:54:59,989 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:54:59,997 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3203134c to 
127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:55:00,000 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d2d3bb6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:55:00,000 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:00,000 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 13:55:00,001 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:00,002 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store-tmp 2023-05-31 13:55:00,017 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,017 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:55:00,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:00,017 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:00,017 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:55:00,018 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:00,018 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:55:00,018 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:55:00,018 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:55:00,021 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38553%2C1685541299888, suffix=, logDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888, archiveDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/oldWALs, maxLogs=10 2023-05-31 13:55:00,034 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541300022 2023-05-31 13:55:00,034 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] 2023-05-31 13:55:00,034 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:00,034 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,034 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,034 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,037 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,039 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 13:55:00,040 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 13:55:00,041 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,042 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,042 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,045 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:00,048 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:00,048 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=849989, jitterRate=0.08081763982772827}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:55:00,049 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:55:00,049 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 13:55:00,050 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 13:55:00,050 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 13:55:00,051 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 13:55:00,051 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 13:55:00,052 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 13:55:00,052 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 13:55:00,053 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 13:55:00,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 13:55:00,064 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 13:55:00,065 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 13:55:00,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 13:55:00,066 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 13:55:00,066 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 13:55:00,069 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,070 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 13:55:00,070 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 13:55:00,071 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 13:55:00,072 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:00,072 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:00,072 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,072 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,38553,1685541299888, sessionid=0x1008184a1ba0000, setting cluster-up flag (Was=false) 2023-05-31 13:55:00,075 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,079 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 13:55:00,080 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:55:00,083 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,086 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 13:55:00,087 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:55:00,087 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.hbase-snapshot/.tmp 2023-05-31 13:55:00,094 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:00,095 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,095 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:55:00,096 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,101 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541330101 2023-05-31 13:55:00,102 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 13:55:00,102 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 13:55:00,103 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 13:55:00,103 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 13:55:00,103 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 13:55:00,103 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 13:55:00,103 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,103 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:00,104 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 13:55:00,104 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 13:55:00,105 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 13:55:00,105 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 13:55:00,106 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 
'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 13:55:00,109 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541300109,5,FailOnTimeoutGroup] 2023-05-31 13:55:00,109 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541300109,5,FailOnTimeoutGroup] 2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:00,109 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,120 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:00,121 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:00,121 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b 2023-05-31 13:55:00,135 DEBUG [PEWorker-1] 
regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,141 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:55:00,142 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/info 2023-05-31 13:55:00,143 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:55:00,144 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,144 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:55:00,145 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:00,146 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:55:00,147 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,147 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:55:00,148 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/table 2023-05-31 13:55:00,149 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:55:00,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,151 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740 2023-05-31 13:55:00,151 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740 2023-05-31 13:55:00,153 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:55:00,154 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:55:00,156 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(951): ClusterId : 6ed5110c-fe6a-4851-a818-94d4b2a9bcaf 2023-05-31 13:55:00,158 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:55:00,160 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:55:00,160 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:55:00,168 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:00,169 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:55:00,170 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=814289, jitterRate=0.03542220592498779}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:55:00,170 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:55:00,170 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:55:00,171 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:55:00,171 DEBUG [RS:0;jenkins-hbase17:43095] zookeeper.ReadOnlyZKClient(139): Connect 0x3e1d19ea to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:55:00,171 DEBUG 
[PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:55:00,171 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:55:00,171 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:55:00,171 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:55:00,171 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:55:00,173 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:00,173 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 13:55:00,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:55:00,176 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:55:00,177 DEBUG [RS:0;jenkins-hbase17:43095] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a228af1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:55:00,178 DEBUG [RS:0;jenkins-hbase17:43095] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f2013d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:55:00,178 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:55:00,185 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:43095 2023-05-31 13:55:00,185 INFO [RS:0;jenkins-hbase17:43095] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:55:00,185 INFO [RS:0;jenkins-hbase17:43095] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:55:00,185 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 13:55:00,186 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,38553,1685541299888 with isa=jenkins-hbase17.apache.org/136.243.18.41:43095, startcode=1685541299939 2023-05-31 13:55:00,186 DEBUG [RS:0;jenkins-hbase17:43095] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:55:00,190 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:50557, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:55:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,192 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1595): Config from master: 
hbase.rootdir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b 2023-05-31 13:55:00,192 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34707 2023-05-31 13:55:00,192 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:55:00,194 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:55:00,194 DEBUG [RS:0;jenkins-hbase17:43095] zookeeper.ZKUtil(162): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,194 WARN [RS:0;jenkins-hbase17:43095] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-31 13:55:00,195 INFO [RS:0;jenkins-hbase17:43095] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:00,195 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,195 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43095,1685541299939] 2023-05-31 13:55:00,200 DEBUG [RS:0;jenkins-hbase17:43095] zookeeper.ZKUtil(162): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,200 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:55:00,201 INFO [RS:0;jenkins-hbase17:43095] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:55:00,202 INFO [RS:0;jenkins-hbase17:43095] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:55:00,208 INFO [RS:0;jenkins-hbase17:43095] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:55:00,208 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:00,208 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:55:00,210 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:55:00,210 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,211 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,211 DEBUG [RS:0;jenkins-hbase17:43095] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,211 DEBUG [RS:0;jenkins-hbase17:43095] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:00,212 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,212 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,212 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,223 INFO [RS:0;jenkins-hbase17:43095] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:55:00,223 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43095,1685541299939-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:00,238 INFO [RS:0;jenkins-hbase17:43095] regionserver.Replication(203): jenkins-hbase17.apache.org,43095,1685541299939 started 2023-05-31 13:55:00,238 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43095,1685541299939, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43095, sessionid=0x1008184a1ba0001 2023-05-31 13:55:00,238 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:55:00,238 DEBUG [RS:0;jenkins-hbase17:43095] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,238 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43095,1685541299939' 2023-05-31 13:55:00,238 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43095,1685541299939' 2023-05-31 13:55:00,239 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 13:55:00,240 DEBUG [RS:0;jenkins-hbase17:43095] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:55:00,240 DEBUG [RS:0;jenkins-hbase17:43095] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:55:00,240 INFO [RS:0;jenkins-hbase17:43095] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:55:00,240 INFO [RS:0;jenkins-hbase17:43095] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 13:55:00,328 DEBUG [jenkins-hbase17:38553] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:55:00,329 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43095,1685541299939, state=OPENING 2023-05-31 13:55:00,330 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:55:00,331 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43095,1685541299939}] 2023-05-31 13:55:00,332 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:55:00,343 INFO [RS:0;jenkins-hbase17:43095] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43095%2C1685541299939, suffix=, 
logDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939, archiveDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/oldWALs, maxLogs=32 2023-05-31 13:55:00,354 INFO [RS:0;jenkins-hbase17:43095] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 2023-05-31 13:55:00,354 DEBUG [RS:0;jenkins-hbase17:43095] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] 2023-05-31 13:55:00,487 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,487 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:55:00,491 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36854, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:55:00,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:55:00,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:00,498 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43095%2C1685541299939.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939, archiveDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/oldWALs, maxLogs=32 2023-05-31 13:55:00,511 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.meta.1685541300499.meta 2023-05-31 13:55:00,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] 2023-05-31 13:55:00,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:00,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:55:00,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:55:00,512 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:55:00,512 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:55:00,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:55:00,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:55:00,515 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:55:00,516 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/info 2023-05-31 13:55:00,516 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/info 2023-05-31 13:55:00,517 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:55:00,517 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,518 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:55:00,519 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:00,519 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:00,519 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:55:00,520 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,520 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:55:00,521 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/table 2023-05-31 13:55:00,521 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740/table 2023-05-31 13:55:00,521 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:55:00,522 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:00,523 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740 2023-05-31 13:55:00,524 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/meta/1588230740 2023-05-31 13:55:00,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:55:00,527 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:55:00,529 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=804356, jitterRate=0.02279190719127655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:55:00,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:55:00,531 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541300487 2023-05-31 13:55:00,537 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:55:00,538 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:55:00,539 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43095,1685541299939, state=OPEN 2023-05-31 13:55:00,541 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:55:00,541 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:55:00,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:55:00,544 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43095,1685541299939 in 209 msec 2023-05-31 13:55:00,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:55:00,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-05-31 13:55:00,549 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 456 msec 2023-05-31 13:55:00,549 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541300549, completionTime=-1 2023-05-31 13:55:00,549 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:55:00,549 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:55:00,553 DEBUG [hconnection-0x2b3506c8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:55:00,556 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:55:00,558 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:55:00,558 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541360558 2023-05-31 13:55:00,558 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541420558 2023-05-31 13:55:00,558 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-05-31 13:55:00,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38553,1685541299888-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38553,1685541299888-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38553,1685541299888-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:38553, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 13:55:00,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:00,566 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 13:55:00,566 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 13:55:00,569 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:55:00,570 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:55:00,572 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,573 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315 empty. 2023-05-31 13:55:00,574 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,574 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 13:55:00,596 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:00,598 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => dade17a083cec2951119ec5f9a7fe315, NAME => 'hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing dade17a083cec2951119ec5f9a7fe315, disabling compactions & flushes 2023-05-31 13:55:00,608 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. after waiting 0 ms 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,608 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,608 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for dade17a083cec2951119ec5f9a7fe315: 2023-05-31 13:55:00,611 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:55:00,612 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541300612"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541300612"}]},"ts":"1685541300612"} 2023-05-31 13:55:00,616 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:55:00,617 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:55:00,618 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541300617"}]},"ts":"1685541300617"} 2023-05-31 13:55:00,619 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 13:55:00,623 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=dade17a083cec2951119ec5f9a7fe315, ASSIGN}] 2023-05-31 13:55:00,626 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=dade17a083cec2951119ec5f9a7fe315, ASSIGN 2023-05-31 13:55:00,628 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=dade17a083cec2951119ec5f9a7fe315, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43095,1685541299939; forceNewPlan=false, retain=false 2023-05-31 13:55:00,779 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=dade17a083cec2951119ec5f9a7fe315, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,779 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541300779"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541300779"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541300779"}]},"ts":"1685541300779"} 2023-05-31 13:55:00,782 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure dade17a083cec2951119ec5f9a7fe315, server=jenkins-hbase17.apache.org,43095,1685541299939}] 2023-05-31 13:55:00,939 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dade17a083cec2951119ec5f9a7fe315, NAME => 'hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:00,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:00,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,942 INFO 
[StoreOpener-dade17a083cec2951119ec5f9a7fe315-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,943 DEBUG [StoreOpener-dade17a083cec2951119ec5f9a7fe315-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315/info 2023-05-31 13:55:00,943 DEBUG [StoreOpener-dade17a083cec2951119ec5f9a7fe315-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315/info 2023-05-31 13:55:00,944 INFO [StoreOpener-dade17a083cec2951119ec5f9a7fe315-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dade17a083cec2951119ec5f9a7fe315 columnFamilyName info 2023-05-31 13:55:00,944 INFO [StoreOpener-dade17a083cec2951119ec5f9a7fe315-1] regionserver.HStore(310): Store=dade17a083cec2951119ec5f9a7fe315/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 13:55:00,945 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,946 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,948 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:00,951 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/hbase/namespace/dade17a083cec2951119ec5f9a7fe315/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:00,951 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened dade17a083cec2951119ec5f9a7fe315; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=842325, jitterRate=0.07107178866863251}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:55:00,951 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for dade17a083cec2951119ec5f9a7fe315: 2023-05-31 13:55:00,954 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315., pid=6, masterSystemTime=1685541300935 2023-05-31 13:55:00,956 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,957 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. 2023-05-31 13:55:00,957 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=dade17a083cec2951119ec5f9a7fe315, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:00,958 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541300957"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541300957"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541300957"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541300957"}]},"ts":"1685541300957"} 2023-05-31 13:55:00,962 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 13:55:00,963 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure dade17a083cec2951119ec5f9a7fe315, server=jenkins-hbase17.apache.org,43095,1685541299939 in 177 msec 2023-05-31 13:55:00,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 13:55:00,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=dade17a083cec2951119ec5f9a7fe315, ASSIGN in 339 msec 2023-05-31 13:55:00,966 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:55:00,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541300966"}]},"ts":"1685541300966"} 2023-05-31 13:55:00,968 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 13:55:00,969 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 13:55:00,970 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:55:00,970 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:00,970 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:00,972 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 405 msec 2023-05-31 13:55:00,975 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 13:55:00,987 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, 
quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:00,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-05-31 13:55:00,997 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 13:55:01,005 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:01,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-31 13:55:01,022 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 13:55:01,023 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 13:55:01,023 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.062sec 2023-05-31 13:55:01,023 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 13:55:01,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 13:55:01,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-05-31 13:55:01,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38553,1685541299888-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-05-31 13:55:01,024 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38553,1685541299888-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-05-31 13:55:01,027 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-05-31 13:55:01,056 DEBUG [Listener at localhost.localdomain/35725] zookeeper.ReadOnlyZKClient(139): Connect 0x42836500 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 13:55:01,061 DEBUG [Listener at localhost.localdomain/35725] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bb66832, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 13:55:01,063 DEBUG [hconnection-0x62397fed-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 13:55:01,066 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36876, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 13:55:01,068 INFO [Listener at localhost.localdomain/35725] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,38553,1685541299888
2023-05-31 13:55:01,068 INFO [Listener at localhost.localdomain/35725] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:55:01,071 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-05-31 13:55:01,071 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:55:01,072 INFO [Listener at localhost.localdomain/35725] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-05-31 13:55:01,072 INFO [Listener at localhost.localdomain/35725] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart
2023-05-31 13:55:01,072 INFO [Listener at localhost.localdomain/35725] wal.TestLogRolling(432): Replication=2
2023-05-31 13:55:01,073 DEBUG [Listener at localhost.localdomain/35725] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 13:55:01,077 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 13:55:01,078 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 13:55:01,079 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 13:55:01,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 13:55:01,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart
2023-05-31 13:55:01,084 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 13:55:01,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9
2023-05-31 13:55:01,085 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 13:55:01,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 13:55:01,087 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,088 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f empty.
2023-05-31 13:55:01,088 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,088 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions
2023-05-31 13:55:01,103 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001
2023-05-31 13:55:01,105 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 43c99e42b7542e3674682dc0fda4052f, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/.tmp
2023-05-31 13:55:01,112 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:55:01,112 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 43c99e42b7542e3674682dc0fda4052f, disabling compactions & flushes
2023-05-31 13:55:01,112 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,113 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,113 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. after waiting 0 ms
2023-05-31 13:55:01,113 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,113 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,113 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 43c99e42b7542e3674682dc0fda4052f:
2023-05-31 13:55:01,116 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 13:55:01,117 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685541301116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541301116"}]},"ts":"1685541301116"}
2023-05-31 13:55:01,118 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 13:55:01,120 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 13:55:01,120 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541301120"}]},"ts":"1685541301120"}
2023-05-31 13:55:01,122 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta
2023-05-31 13:55:01,125 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=43c99e42b7542e3674682dc0fda4052f, ASSIGN}]
2023-05-31 13:55:01,127 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=43c99e42b7542e3674682dc0fda4052f, ASSIGN
2023-05-31 13:55:01,128 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=43c99e42b7542e3674682dc0fda4052f, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43095,1685541299939; forceNewPlan=false, retain=false
2023-05-31 13:55:01,279 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=43c99e42b7542e3674682dc0fda4052f, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43095,1685541299939
2023-05-31 13:55:01,279 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685541301279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541301279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541301279"}]},"ts":"1685541301279"}
2023-05-31 13:55:01,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 43c99e42b7542e3674682dc0fda4052f, server=jenkins-hbase17.apache.org,43095,1685541299939}]
2023-05-31 13:55:01,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43c99e42b7542e3674682dc0fda4052f, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:55:01,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:55:01,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,442 INFO [StoreOpener-43c99e42b7542e3674682dc0fda4052f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,444 DEBUG [StoreOpener-43c99e42b7542e3674682dc0fda4052f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f/info
2023-05-31 13:55:01,444 DEBUG [StoreOpener-43c99e42b7542e3674682dc0fda4052f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f/info
2023-05-31 13:55:01,444 INFO [StoreOpener-43c99e42b7542e3674682dc0fda4052f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43c99e42b7542e3674682dc0fda4052f columnFamilyName info
2023-05-31 13:55:01,445 INFO [StoreOpener-43c99e42b7542e3674682dc0fda4052f-1] regionserver.HStore(310): Store=43c99e42b7542e3674682dc0fda4052f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 13:55:01,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:01,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/data/default/TestLogRolling-testLogRollOnPipelineRestart/43c99e42b7542e3674682dc0fda4052f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 13:55:01,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 43c99e42b7542e3674682dc0fda4052f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=743069, jitterRate=-0.05513903498649597}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 13:55:01,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 43c99e42b7542e3674682dc0fda4052f:
2023-05-31 13:55:01,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f., pid=11, masterSystemTime=1685541301435
2023-05-31 13:55:01,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:01,457 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=43c99e42b7542e3674682dc0fda4052f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43095,1685541299939
2023-05-31 13:55:01,457 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685541301457"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541301457"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541301457"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541301457"}]},"ts":"1685541301457"}
2023-05-31 13:55:01,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-05-31 13:55:01,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 43c99e42b7542e3674682dc0fda4052f, server=jenkins-hbase17.apache.org,43095,1685541299939 in 177 msec
2023-05-31 13:55:01,464 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-05-31 13:55:01,464 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=43c99e42b7542e3674682dc0fda4052f, ASSIGN in 337 msec
2023-05-31 13:55:01,465 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 13:55:01,465 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541301465"}]},"ts":"1685541301465"}
2023-05-31 13:55:01,467 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta
2023-05-31 13:55:01,469 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 13:55:01,471 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 390 msec
2023-05-31 13:55:03,898 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-05-31 13:55:06,201 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart'
2023-05-31 13:55:11,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 13:55:11,087 INFO [Listener at localhost.localdomain/35725] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed
2023-05-31 13:55:11,090 DEBUG [Listener at localhost.localdomain/35725] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart
2023-05-31 13:55:11,090 DEBUG [Listener at localhost.localdomain/35725] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:13,097 INFO [Listener at localhost.localdomain/35725] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344
2023-05-31 13:55:13,097 WARN [Listener at localhost.localdomain/35725] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:55:13,099 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 13:55:13,099 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.meta.1685541300499.meta block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]) is bad.
2023-05-31 13:55:13,100 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 13:55:13,100 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 13:55:13,101 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541300022 block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]) is bad.
2023-05-31 13:55:13,102 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33463,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]) is bad.
2023-05-31 13:55:13,106 INFO [Listener at localhost.localdomain/35725] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:55:13,109 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-895067350_17 at /127.0.0.1:60140 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60140 dst: /127.0.0.1:45107
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45107 remote=/127.0.0.1:60140]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,109 WARN [PacketResponder: BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45107]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,109 WARN [PacketResponder: BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45107]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,109 WARN [PacketResponder: BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45107]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,110 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:53658 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33463:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53658 dst: /127.0.0.1:33463
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,109 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:60174 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60174 dst: /127.0.0.1:45107
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45107 remote=/127.0.0.1:60174]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,109 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:60170 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60170 dst: /127.0.0.1:45107
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45107 remote=/127.0.0.1:60170]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:13,112 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-895067350_17 at /127.0.0.1:53636 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33463:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53636 dst: /127.0.0.1:33463
java.io.InterruptedIOException: Interrupted while waiting for IO on channel
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:13,111 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:53682 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33463:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53682 dst: /127.0.0.1:33463 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:13,210 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:55:13,210 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid 2affb1fe-d1e0-47db-8472-784d08639c5e) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:13,211 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data3/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:13,211 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data4/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:13,218 WARN [Listener at localhost.localdomain/35725] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:13,221 WARN [Listener at localhost.localdomain/35725] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:13,222 INFO [Listener at localhost.localdomain/35725] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:13,235 INFO [Listener at localhost.localdomain/35725] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_45037_datanode____.h3fq5i/webapp 2023-05-31 13:55:13,308 INFO [Listener at localhost.localdomain/35725] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45037 2023-05-31 13:55:13,316 WARN [Listener at localhost.localdomain/34027] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:13,323 WARN [Listener at localhost.localdomain/34027] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:55:13,323 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:55:13,324 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:55:13,324 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:55:13,328 INFO [Listener at localhost.localdomain/34027] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:55:13,375 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe603ff66c8aa845c: Processing first storage report for DS-3b23d7cb-6456-4c30-9119-9abae75de53e from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e 2023-05-31 13:55:13,376 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe603ff66c8aa845c: from storage DS-3b23d7cb-6456-4c30-9119-9abae75de53e node DatanodeRegistration(127.0.0.1:46041, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=42839, infoSecurePort=0, ipcPort=34027, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:13,376 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe603ff66c8aa845c: Processing first storage report for DS-768d311f-97cc-4849-9f8d-9ce13147dd1d from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e 2023-05-31 13:55:13,376 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe603ff66c8aa845c: from storage DS-768d311f-97cc-4849-9f8d-9ce13147dd1d node DatanodeRegistration(127.0.0.1:46041, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=42839, infoSecurePort=0, ipcPort=34027, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:55:13,431 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:39542 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1008]] 
datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39542 dst: /127.0.0.1:45107 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:13,434 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-05-31 13:55:13,431 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:39524 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39524 dst: /127.0.0.1:45107 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:13,431 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-895067350_17 at /127.0.0.1:39522 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45107:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39522 dst: /127.0.0.1:45107 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:13,434 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid eaca20e2-3460-4ade-ae1c-a8756561d942) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:13,436 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:13,436 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data2/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:13,443 WARN [Listener at localhost.localdomain/34027] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:13,446 WARN [Listener at localhost.localdomain/34027] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:13,447 INFO [Listener at localhost.localdomain/34027] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:13,452 INFO [Listener at localhost.localdomain/34027] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_39987_datanode____.wq6t6v/webapp 2023-05-31 13:55:13,527 INFO [Listener at localhost.localdomain/34027] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39987 2023-05-31 13:55:13,535 WARN [Listener at localhost.localdomain/37987] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:13,593 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x51da55780871e43f: Processing first storage report for DS-d15d605a-c411-44e8-bca7-c810ef84d428 from datanode eaca20e2-3460-4ade-ae1c-a8756561d942 2023-05-31 13:55:13,593 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x51da55780871e43f: from storage DS-d15d605a-c411-44e8-bca7-c810ef84d428 node DatanodeRegistration(127.0.0.1:39235, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=45501, infoSecurePort=0, ipcPort=37987, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:13,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x51da55780871e43f: Processing first storage report for DS-7ebe5b01-f189-43d7-ade4-51834b8b387b from datanode eaca20e2-3460-4ade-ae1c-a8756561d942 2023-05-31 13:55:13,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x51da55780871e43f: from storage DS-7ebe5b01-f189-43d7-ade4-51834b8b387b node DatanodeRegistration(127.0.0.1:39235, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=45501, infoSecurePort=0, ipcPort=37987, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 
2023-05-31 13:55:14,539 INFO [Listener at localhost.localdomain/37987] wal.TestLogRolling(481): Data Nodes restarted 2023-05-31 13:55:14,541 INFO [Listener at localhost.localdomain/37987] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-31 13:55:14,542 WARN [RS:0;jenkins-hbase17:43095.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:14,542 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C43095%2C1685541299939:(num 1685541300344) roll requested 2023-05-31 13:55:14,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43095] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:14,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43095] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:36876 deadline: 1685541324541, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-31 13:55:14,552 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 newFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 2023-05-31 13:55:14,552 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-31 13:55:14,552 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 
2023-05-31 13:55:14,553 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:39235,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] 2023-05-31 13:55:14,553 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:14,553 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 is not closed yet, will try archiving it next time 2023-05-31 13:55:14,553 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:26,560 INFO [Listener at localhost.localdomain/37987] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-31 13:55:28,563 WARN [Listener at localhost.localdomain/37987] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:55:28,565 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:39235,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-31 13:55:28,565 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:39235,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39235,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]) is bad. 
2023-05-31 13:55:28,566 WARN [PacketResponder: BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39235]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:28,566 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:33238 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:46041:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33238 dst: /127.0.0.1:46041 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:28,570 INFO [Listener at localhost.localdomain/37987] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:55:28,594 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid eaca20e2-3460-4ade-ae1c-a8756561d942) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:28,595 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:28,595 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data2/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): 
Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:28,675 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:35612 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:39235:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35612 dst: /127.0.0.1:39235 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:28,685 WARN [Listener at localhost.localdomain/37987] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:28,688 WARN [Listener at localhost.localdomain/37987] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:28,689 INFO [Listener at localhost.localdomain/37987] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:28,695 INFO [Listener at localhost.localdomain/37987] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_33827_datanode____.rv4k8s/webapp 2023-05-31 13:55:28,767 INFO [Listener at localhost.localdomain/37987] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33827 2023-05-31 13:55:28,776 WARN [Listener at localhost.localdomain/40939] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:28,780 WARN [Listener at localhost.localdomain/40939] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:55:28,781 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:55:28,826 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:55:28,870 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6018a02593d430c: Processing first storage report for DS-d15d605a-c411-44e8-bca7-c810ef84d428 from datanode eaca20e2-3460-4ade-ae1c-a8756561d942 2023-05-31 13:55:28,871 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6018a02593d430c: from storage DS-d15d605a-c411-44e8-bca7-c810ef84d428 node DatanodeRegistration(127.0.0.1:39277, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=33419, infoSecurePort=0, ipcPort=40939, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:28,871 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6018a02593d430c: Processing first storage report for DS-7ebe5b01-f189-43d7-ade4-51834b8b387b from datanode eaca20e2-3460-4ade-ae1c-a8756561d942 2023-05-31 13:55:28,871 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6018a02593d430c: from storage DS-7ebe5b01-f189-43d7-ade4-51834b8b387b node DatanodeRegistration(127.0.0.1:39277, datanodeUuid=eaca20e2-3460-4ade-ae1c-a8756561d942, infoPort=33419, infoSecurePort=0, ipcPort=40939, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:55:28,931 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1207258311_17 at /127.0.0.1:47414 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:46041:DataXceiver error processing WRITE_BLOCK 
operation src: /127.0.0.1:47414 dst: /127.0.0.1:46041 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:28,933 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:55:28,933 WARN [BP-585026492-136.243.18.41-1685541299404 
heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid 2affb1fe-d1e0-47db-8472-784d08639c5e) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:28,934 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data3/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:28,934 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data4/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:28,941 WARN [Listener at localhost.localdomain/40939] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:28,943 WARN [Listener at localhost.localdomain/40939] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:28,945 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:28,951 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/java.io.tmpdir/Jetty_localhost_36757_datanode____.w6frcz/webapp 2023-05-31 13:55:29,024 
INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36757 2023-05-31 13:55:29,037 WARN [Listener at localhost.localdomain/39739] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:29,090 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6cb6cd5a40ce0f6d: Processing first storage report for DS-3b23d7cb-6456-4c30-9119-9abae75de53e from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e 2023-05-31 13:55:29,090 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6cb6cd5a40ce0f6d: from storage DS-3b23d7cb-6456-4c30-9119-9abae75de53e node DatanodeRegistration(127.0.0.1:42301, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=41823, infoSecurePort=0, ipcPort=39739, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:55:29,090 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6cb6cd5a40ce0f6d: Processing first storage report for DS-768d311f-97cc-4849-9f8d-9ce13147dd1d from datanode 2affb1fe-d1e0-47db-8472-784d08639c5e 2023-05-31 13:55:29,090 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6cb6cd5a40ce0f6d: from storage DS-768d311f-97cc-4849-9f8d-9ce13147dd1d node DatanodeRegistration(127.0.0.1:42301, datanodeUuid=2affb1fe-d1e0-47db-8472-784d08639c5e, infoPort=41823, infoSecurePort=0, ipcPort=39739, storageInfo=lv=-57;cid=testClusterID;nsid=432681125;c=1685541299404), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:30,046 INFO [Listener at localhost.localdomain/39739] wal.TestLogRolling(498): Data Nodes restarted 2023-05-31 13:55:30,048 INFO [Listener at localhost.localdomain/39739] wal.AbstractTestLogRolling(233): 
Validated row row1004 2023-05-31 13:55:30,049 WARN [RS:0;jenkins-hbase17:43095.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,050 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C43095%2C1685541299939:(num 1685541314543) roll requested 2023-05-31 13:55:30,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43095] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43095] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:36876 deadline: 1685541340049, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-31 13:55:30,060 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 newFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 2023-05-31 13:55:30,060 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-31 13:55:30,061 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 
2023-05-31 13:55:30,061 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39277,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK], DatanodeInfoWithStorage[127.0.0.1:42301,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] 2023-05-31 13:55:30,061 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,061 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 is not closed yet, will try archiving it next time 2023-05-31 13:55:30,061 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:46041,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,103 WARN [master/jenkins-hbase17:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,103 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C38553%2C1685541299888:(num 1685541300022) roll requested 2023-05-31 13:55:30,103 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All 
datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,104 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,115 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-31 13:55:30,115 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541300022 with entries=88, filesize=43.81 KB; new WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541330103 2023-05-31 13:55:30,115 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39277,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK], DatanodeInfoWithStorage[127.0.0.1:42301,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]] 2023-05-31 13:55:30,116 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541300022 is not closed yet, will try archiving it next time 
2023-05-31 13:55:30,116 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:30,116 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888/jenkins-hbase17.apache.org%2C38553%2C1685541299888.1685541300022; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:42,109 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 newFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 2023-05-31 13:55:42,111 INFO [Listener at localhost.localdomain/39739] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 2023-05-31 13:55:42,116 DEBUG [Listener at localhost.localdomain/39739] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42301,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], DatanodeInfoWithStorage[127.0.0.1:39277,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] 2023-05-31 13:55:42,116 DEBUG [Listener at localhost.localdomain/39739] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 is not closed yet, will try archiving it next time 2023-05-31 13:55:42,116 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 2023-05-31 13:55:42,117 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 2023-05-31 13:55:42,121 WARN [IPC Server handler 0 on default port 34707] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1016 2023-05-31 13:55:42,124 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 after 7ms 2023-05-31 13:55:43,113 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1960ed] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-585026492-136.243.18.41-1685541299404:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:42301,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data4/current/BP-585026492-136.243.18.41-1685541299404/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:46,125 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 after 4008ms 2023-05-31 13:55:46,125 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541300344 2023-05-31 13:55:46,136 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685541300951/Put/vlen=176/seqid=0] 2023-05-31 13:55:46,136 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #4: [default/info:d/1685541300983/Put/vlen=9/seqid=0] 2023-05-31 13:55:46,136 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #5: [hbase/info:d/1685541301003/Put/vlen=7/seqid=0] 2023-05-31 13:55:46,136 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685541301452/Put/vlen=232/seqid=0] 2023-05-31 13:55:46,137 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #4: [row1002/info:/1685541311095/Put/vlen=1045/seqid=0] 2023-05-31 13:55:46,137 DEBUG [Listener at 
localhost.localdomain/39739] wal.ProtobufLogReader(420): EOF at position 2162 2023-05-31 13:55:46,137 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 2023-05-31 13:55:46,137 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 2023-05-31 13:55:46,138 WARN [IPC Server handler 0 on default port 34707] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-31 13:55:46,138 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 after 1ms 2023-05-31 13:55:47,093 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1c9896fc] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-585026492-136.243.18.41-1685541299404:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:39277,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current/BP-585026492-136.243.18.41-1685541299404/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at 
org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), 
block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current/BP-585026492-136.243.18.41-1685541299404/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-31 13:55:50,139 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 after 4002ms 2023-05-31 13:55:50,139 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541314543 2023-05-31 13:55:50,144 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #6: [row1003/info:/1685541324554/Put/vlen=1045/seqid=0] 2023-05-31 13:55:50,144 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #7: [row1004/info:/1685541326561/Put/vlen=1045/seqid=0] 2023-05-31 13:55:50,144 DEBUG [Listener at localhost.localdomain/39739] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-31 13:55:50,144 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 2023-05-31 13:55:50,144 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 2023-05-31 13:55:50,145 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 after 1ms 2023-05-31 13:55:50,145 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541330050 2023-05-31 13:55:50,148 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(522): #9: [row1005/info:/1685541340095/Put/vlen=1045/seqid=0] 2023-05-31 13:55:50,148 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 2023-05-31 13:55:50,148 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 2023-05-31 13:55:50,149 WARN [IPC Server handler 1 on default port 34707] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-31 13:55:50,149 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 after 1ms 2023-05-31 13:55:51,092 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-895067350_17 at /127.0.0.1:38340 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:42301:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38340 dst: /127.0.0.1:42301 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42301 remote=/127.0.0.1:38340]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:51,093 WARN [ResponseProcessor for block BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 13:55:51,094 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-895067350_17 at /127.0.0.1:37794 [Receiving block BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:39277:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37794 dst: /127.0.0.1:39277 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 13:55:51,094 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 block BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42301,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39277,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42301,DS-3b23d7cb-6456-4c30-9119-9abae75de53e,DISK]) is bad. 2023-05-31 13:55:51,101 WARN [DataStreamer for file /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 block BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at 
org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,150 INFO [Listener at localhost.localdomain/39739] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 after 
4002ms 2023-05-31 13:55:54,150 DEBUG [Listener at localhost.localdomain/39739] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 2023-05-31 13:55:54,155 DEBUG [Listener at localhost.localdomain/39739] wal.ProtobufLogReader(420): EOF at position 83 2023-05-31 13:55:54,156 INFO [Listener at localhost.localdomain/39739] regionserver.HRegion(2745): Flushing 43c99e42b7542e3674682dc0fda4052f 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-31 13:55:54,157 WARN [RS:0;jenkins-hbase17:43095.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,158 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL 
FSHLog jenkins-hbase17.apache.org%2C43095%2C1685541299939:(num 1685541342098) roll requested 2023-05-31 13:55:54,158 DEBUG [Listener at localhost.localdomain/39739] regionserver.HRegion(2446): Flush status journal for 43c99e42b7542e3674682dc0fda4052f: 2023-05-31 13:55:54,158 INFO [Listener at localhost.localdomain/39739] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,159 INFO [Listener at localhost.localdomain/39739] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-05-31 13:55:54,160 WARN [RS_OPEN_META-regionserver/jenkins-hbase17:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,160 DEBUG [Listener at localhost.localdomain/39739] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 13:55:54,161 INFO [Listener at localhost.localdomain/39739] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at 
java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,162 INFO [Listener at localhost.localdomain/39739] regionserver.HRegion(2745): Flushing dade17a083cec2951119ec5f9a7fe315 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 13:55:54,163 DEBUG [Listener at localhost.localdomain/39739] regionserver.HRegion(2446): Flush status journal for dade17a083cec2951119ec5f9a7fe315: 2023-05-31 13:55:54,163 INFO [Listener at localhost.localdomain/39739] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,172 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:55:54,173 INFO [Listener at localhost.localdomain/39739] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 13:55:54,173 DEBUG [Listener at localhost.localdomain/39739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42836500 to 127.0.0.1:59404 2023-05-31 13:55:54,173 DEBUG [Listener at localhost.localdomain/39739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:55:54,174 DEBUG [Listener at localhost.localdomain/39739] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:55:54,174 DEBUG [Listener at localhost.localdomain/39739] util.JVMClusterUtil(257): Found active master hash=1947538196, stopped=false 2023-05-31 13:55:54,174 INFO [Listener at localhost.localdomain/39739] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,38553,1685541299888 2023-05-31 13:55:54,175 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): 
regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:54,175 INFO [Listener at localhost.localdomain/39739] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:55:54,175 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:54,176 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:54,176 DEBUG [Listener at localhost.localdomain/39739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3203134c to 127.0.0.1:59404 2023-05-31 13:55:54,176 DEBUG [Listener at localhost.localdomain/39739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:55:54,176 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:55:54,176 INFO [Listener at localhost.localdomain/39739] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,43095,1685541299939' ***** 2023-05-31 13:55:54,177 INFO [Listener at localhost.localdomain/39739] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:55:54,177 INFO [RS:0;jenkins-hbase17:43095] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:55:54,177 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:55:54,177 INFO [RS:0;jenkins-hbase17:43095] flush.RegionServerFlushTableProcedureManager(117): Stopping region server 
flush procedure manager gracefully. 2023-05-31 13:55:54,177 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:55:54,177 INFO [RS:0;jenkins-hbase17:43095] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 13:55:54,178 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for 43c99e42b7542e3674682dc0fda4052f 2023-05-31 13:55:54,180 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 newFile=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541354158 2023-05-31 13:55:54,180 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-31 13:55:54,180 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541354158 2023-05-31 13:55:54,180 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, 
non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,180 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for dade17a083cec2951119ec5f9a7fe315 2023-05-31 13:55:54,180 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098 failed. 
Cause="Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-31 13:55:54,180 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43095,1685541299939 2023-05-31 13:55:54,180 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is 
UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 43c99e42b7542e3674682dc0fda4052f, disabling compactions & flushes 2023-05-31 13:55:54,180 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939/jenkins-hbase17.apache.org%2C43095%2C1685541299939.1685541342098, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at 
org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-585026492-136.243.18.41-1685541299404:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 13:55:54,180 DEBUG [RS:0;jenkins-hbase17:43095] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e1d19ea to 127.0.0.1:59404 2023-05-31 13:55:54,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. 
2023-05-31 13:55:54,181 DEBUG [RS:0;jenkins-hbase17:43095] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:55:54,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,181 INFO [RS:0;jenkins-hbase17:43095] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-31 13:55:54,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. after waiting 0 ms
2023-05-31 13:55:54,181 INFO [RS:0;jenkins-hbase17:43095] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-31 13:55:54,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,181 INFO [RS:0;jenkins-hbase17:43095] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-31 13:55:54,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 43c99e42b7542e3674682dc0fda4052f 1/1 column families, dataSize=4.20 KB heapSize=4.98 KB
2023-05-31 13:55:54,181 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-31 13:55:54,181 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89)
	at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543)
	at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733)
	at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554)
	at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
in region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 43c99e42b7542e3674682dc0fda4052f:
2023-05-31 13:55:54,182 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase17.apache.org,43095,1685541299939: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. *****
java.io.IOException: Cannot append; log is closed, regionName = TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543)
	at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733)
	at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554)
	at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:54,182 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2023-05-31 13:55:54,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory
2023-05-31 13:55:54,182 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939
2023-05-31 13:55:54,182 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-05-31 13:55:54,183 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException
	at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324)
	at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151)
	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:54,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 13:55:54,183 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1478): Online Regions={43c99e42b7542e3674682dc0fda4052f=TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f., 1588230740=hbase:meta,,1.1588230740, dade17a083cec2951119ec5f9a7fe315=hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.}
2023-05-31 13:55:54,183 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 13:55:54,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC
2023-05-31 13:55:54,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication
2023-05-31 13:55:54,183 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for 43c99e42b7542e3674682dc0fda4052f
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server
2023-05-31 13:55:54,184 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/WALs/jenkins-hbase17.apache.org,43095,1685541299939
2023-05-31 13:55:54,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1107296256, "init": 524288000, "max": 2051014656, "used": 353425448 }, "NonHeapMemoryUsage": { "committed": 139485184, "init": 2555904, "max": -1, "used": 136974712 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] }
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 13:55:54,184 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 13:55:54,184 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1504): Waiting on 1588230740, 43c99e42b7542e3674682dc0fda4052f, dade17a083cec2951119ec5f9a7fe315
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 13:55:54,184 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45107,DS-d15d605a-c411-44e8-bca7-c810ef84d428,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 13:55:54,184 DEBUG [regionserver/jenkins-hbase17:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller
2023-05-31 13:55:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-05-31 13:55:54,184 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C43095%2C1685541299939.meta:.meta(num 1685541300499) roll requested
2023-05-31 13:55:54,184 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer
2023-05-31 13:55:54,184 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38553] master.MasterRpcServices(609): jenkins-hbase17.apache.org,43095,1685541299939 reported a fatal error: ***** ABORTING region server jenkins-hbase17.apache.org,43095,1685541299939: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. ***** Cause: java.io.IOException: Cannot append; log is closed, regionName = TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543)
	at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733)
	at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554)
	at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing dade17a083cec2951119ec5f9a7fe315, disabling compactions & flushes
2023-05-31 13:55:54,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. after waiting 0 ms
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for dade17a083cec2951119ec5f9a7fe315:
2023-05-31 13:55:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 43c99e42b7542e3674682dc0fda4052f, disabling compactions & flushes
2023-05-31 13:55:54,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f. after waiting 0 ms
2023-05-31 13:55:54,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,187 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 43c99e42b7542e3674682dc0fda4052f:
2023-05-31 13:55:54,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685541301078.43c99e42b7542e3674682dc0fda4052f.
2023-05-31 13:55:54,212 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-05-31 13:55:54,213 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-05-31 13:55:54,215 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 13:55:54,384 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-31 13:55:54,384 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(3303): Received CLOSE for dade17a083cec2951119ec5f9a7fe315
2023-05-31 13:55:54,384 DEBUG [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1504): Waiting on 1588230740, dade17a083cec2951119ec5f9a7fe315
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing dade17a083cec2951119ec5f9a7fe315, disabling compactions & flushes
2023-05-31 13:55:54,384 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 13:55:54,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,385 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315. after waiting 0 ms
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 13:55:54,385 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,385 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1825): Memstore data size is 3028 in region hbase:meta,,1.1588230740
2023-05-31 13:55:54,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-31 13:55:54,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,386 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 13:55:54,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for dade17a083cec2951119ec5f9a7fe315:
2023-05-31 13:55:54,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 13:55:54,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-31 13:55:54,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685541300564.dade17a083cec2951119ec5f9a7fe315.
2023-05-31 13:55:54,585 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43095,1685541299939; all regions closed.
2023-05-31 13:55:54,585 DEBUG [RS:0;jenkins-hbase17:43095] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:55:54,585 INFO [RS:0;jenkins-hbase17:43095] regionserver.LeaseManager(133): Closed leases
2023-05-31 13:55:54,585 INFO [RS:0;jenkins-hbase17:43095] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-05-31 13:55:54,585 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 13:55:54,586 INFO [RS:0;jenkins-hbase17:43095] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43095
2023-05-31 13:55:54,588 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 13:55:54,588 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43095,1685541299939
2023-05-31 13:55:54,588 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 13:55:54,589 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43095,1685541299939]
2023-05-31 13:55:54,589 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43095,1685541299939; numProcessing=1
2023-05-31 13:55:54,589 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43095,1685541299939 already deleted, retry=false
2023-05-31 13:55:54,590 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43095,1685541299939 expired; onlineServers=0
2023-05-31 13:55:54,590 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,38553,1685541299888' *****
2023-05-31 13:55:54,590 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-31 13:55:54,590 DEBUG [M:0;jenkins-hbase17:38553] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@702a894e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-05-31 13:55:54,590 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38553,1685541299888
2023-05-31 13:55:54,590 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38553,1685541299888; all regions closed.
2023-05-31 13:55:54,591 DEBUG [M:0;jenkins-hbase17:38553] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:55:54,591 DEBUG [M:0;jenkins-hbase17:38553] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-31 13:55:54,591 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-31 13:55:54,591 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541300109] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541300109,5,FailOnTimeoutGroup]
2023-05-31 13:55:54,591 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541300109] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541300109,5,FailOnTimeoutGroup]
2023-05-31 13:55:54,591 DEBUG [M:0;jenkins-hbase17:38553] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-31 13:55:54,592 INFO [M:0;jenkins-hbase17:38553] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-31 13:55:54,592 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-31 13:55:54,592 INFO [M:0;jenkins-hbase17:38553] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-31 13:55:54,592 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:55:54,592 INFO [M:0;jenkins-hbase17:38553] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-05-31 13:55:54,592 DEBUG [M:0;jenkins-hbase17:38553] master.HMaster(1512): Stopping service threads
2023-05-31 13:55:54,593 INFO [M:0;jenkins-hbase17:38553] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-31 13:55:54,593 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 13:55:54,593 ERROR [M:0;jenkins-hbase17:38553] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-05-31 13:55:54,593 INFO [M:0;jenkins-hbase17:38553] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-31 13:55:54,593 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-31 13:55:54,593 DEBUG [M:0;jenkins-hbase17:38553] zookeeper.ZKUtil(398): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-31 13:55:54,594 WARN [M:0;jenkins-hbase17:38553] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-31 13:55:54,594 INFO [M:0;jenkins-hbase17:38553] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-31 13:55:54,594 INFO [M:0;jenkins-hbase17:38553] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-31 13:55:54,594 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 13:55:54,594 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:55:54,594 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:55:54,594 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 13:55:54,594 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:55:54,594 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.18 KB heapSize=45.83 KB
2023-05-31 13:55:54,610 INFO [M:0;jenkins-hbase17:38553] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.18 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2df7cd3924ec45578e43201f743f9552
2023-05-31 13:55:54,616 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2df7cd3924ec45578e43201f743f9552 as hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2df7cd3924ec45578e43201f743f9552
2023-05-31 13:55:54,621 INFO [M:0;jenkins-hbase17:38553] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34707/user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2df7cd3924ec45578e43201f743f9552, entries=11, sequenceid=92, filesize=7.0 K
2023-05-31 13:55:54,622 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegion(2948): Finished flush of dataSize ~38.18 KB/39101, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=92, compaction requested=false
2023-05-31 13:55:54,623 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:55:54,623 DEBUG [M:0;jenkins-hbase17:38553] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 13:55:54,623 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/b0617cf0-cada-a2a7-1f41-b4136dba909b/MasterData/WALs/jenkins-hbase17.apache.org,38553,1685541299888
2023-05-31 13:55:54,627 INFO [M:0;jenkins-hbase17:38553] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-31 13:55:54,627 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 13:55:54,628 INFO [M:0;jenkins-hbase17:38553] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38553
2023-05-31 13:55:54,630 DEBUG [M:0;jenkins-hbase17:38553] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,38553,1685541299888 already deleted, retry=false
2023-05-31 13:55:54,689 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:55:54,690 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): regionserver:43095-0x1008184a1ba0001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:55:54,689 INFO [RS:0;jenkins-hbase17:43095] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43095,1685541299939; zookeeper connection closed.
2023-05-31 13:55:54,690 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6a25460f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6a25460f 2023-05-31 13:55:54,693 INFO [Listener at localhost.localdomain/39739] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 13:55:54,790 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:55:54,790 INFO [M:0;jenkins-hbase17:38553] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38553,1685541299888; zookeeper connection closed. 2023-05-31 13:55:54,790 DEBUG [Listener at localhost.localdomain/35725-EventThread] zookeeper.ZKWatcher(600): master:38553-0x1008184a1ba0000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:55:54,791 WARN [Listener at localhost.localdomain/39739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:55:54,797 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:55:54,902 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:55:54,902 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid 2affb1fe-d1e0-47db-8472-784d08639c5e) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:54,902 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data3/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:54,902 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data4/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:54,905 WARN [Listener at localhost.localdomain/39739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:55:54,909 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:55:55,012 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:55:55,012 WARN [BP-585026492-136.243.18.41-1685541299404 heartbeating to localhost.localdomain/127.0.0.1:34707] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-585026492-136.243.18.41-1685541299404 (Datanode Uuid eaca20e2-3460-4ade-ae1c-a8756561d942) service to localhost.localdomain/127.0.0.1:34707 2023-05-31 13:55:55,013 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data1/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-05-31 13:55:55,013 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/cluster_dfe169aa-93d3-9a1b-fc41-fecea5804fe5/dfs/data/data2/current/BP-585026492-136.243.18.41-1685541299404] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:55:55,025 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 13:55:55,135 INFO [Listener at localhost.localdomain/39739] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 13:55:55,149 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 13:55:55,157 INFO [Listener at localhost.localdomain/39739] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=87 (was 78) - Thread LEAK? 
-, OpenFileDescriptor=460 (was 460), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=133 (was 180), ProcessCount=171 (was 171), AvailableMemoryMB=7388 (was 7546) 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=87, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=133, ProcessCount=171, AvailableMemoryMB=7388 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/hadoop.log.dir so I do NOT create it in target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/143f337a-a5c6-6b42-b235-0a7969b1e735/hadoop.tmp.dir so I do NOT create it in target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28, deleteOnExit=true 2023-05-31 13:55:55,165 INFO [Listener at localhost.localdomain/39739] 
hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/test.cache.data in system properties and HBase conf 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/hadoop.log.dir in system properties and HBase conf 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 13:55:55,166 DEBUG [Listener at localhost.localdomain/39739] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-31 13:55:55,166 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/nfs.dump.dir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/java.io.tmpdir in system properties and HBase conf 2023-05-31 13:55:55,167 INFO [Listener at localhost.localdomain/39739] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:55:55,168 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 13:55:55,168 INFO [Listener at localhost.localdomain/39739] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 13:55:55,169 WARN [Listener at localhost.localdomain/39739] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:55:55,170 WARN [Listener at localhost.localdomain/39739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:55:55,170 WARN [Listener at localhost.localdomain/39739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:55:55,193 WARN [Listener at localhost.localdomain/39739] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:55,195 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:55,200 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/java.io.tmpdir/Jetty_localhost_localdomain_34427_hdfs____91y8gi/webapp 2023-05-31 13:55:55,272 INFO [Listener at localhost.localdomain/39739] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34427 2023-05-31 13:55:55,274 WARN [Listener at localhost.localdomain/39739] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:55:55,275 WARN [Listener at localhost.localdomain/39739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:55:55,275 WARN [Listener at localhost.localdomain/39739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:55:55,304 WARN [Listener at localhost.localdomain/35345] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:55,317 WARN [Listener at localhost.localdomain/35345] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:55,321 WARN [Listener at localhost.localdomain/35345] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:55:55,322 INFO [Listener at localhost.localdomain/35345] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:55,329 INFO [Listener at localhost.localdomain/35345] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/java.io.tmpdir/Jetty_localhost_39279_datanode____4hvgwl/webapp 2023-05-31 13:55:55,402 INFO [Listener at localhost.localdomain/35345] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39279 2023-05-31 13:55:55,408 WARN [Listener at localhost.localdomain/43883] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:55,421 WARN [Listener at localhost.localdomain/43883] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:55:55,426 WARN [Listener at localhost.localdomain/43883] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 13:55:55,427 INFO [Listener at localhost.localdomain/43883] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:55:55,431 INFO [Listener at localhost.localdomain/43883] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/java.io.tmpdir/Jetty_localhost_46009_datanode____.a5rwo2/webapp 2023-05-31 13:55:55,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfca8eb8c14b6347e: Processing first storage report for DS-749fe520-e75e-405a-a5ea-133f865916e2 from datanode 2533f778-ea21-438f-ade5-b85e37ed9690 2023-05-31 13:55:55,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfca8eb8c14b6347e: from storage DS-749fe520-e75e-405a-a5ea-133f865916e2 node DatanodeRegistration(127.0.0.1:40419, datanodeUuid=2533f778-ea21-438f-ade5-b85e37ed9690, infoPort=43521, infoSecurePort=0, ipcPort=43883, storageInfo=lv=-57;cid=testClusterID;nsid=1322638984;c=1685541355172), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:55,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfca8eb8c14b6347e: Processing first storage report for DS-58e5151d-4b27-4cfe-993b-b2b8e5049c1f from datanode 2533f778-ea21-438f-ade5-b85e37ed9690 2023-05-31 13:55:55,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfca8eb8c14b6347e: from storage DS-58e5151d-4b27-4cfe-993b-b2b8e5049c1f node DatanodeRegistration(127.0.0.1:40419, datanodeUuid=2533f778-ea21-438f-ade5-b85e37ed9690, infoPort=43521, infoSecurePort=0, ipcPort=43883, 
storageInfo=lv=-57;cid=testClusterID;nsid=1322638984;c=1685541355172), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:55,511 INFO [Listener at localhost.localdomain/43883] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46009 2023-05-31 13:55:55,516 WARN [Listener at localhost.localdomain/41065] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:55:55,568 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f6bca44e83799a7: Processing first storage report for DS-bffb245e-8987-4142-9f54-74754d3b92d2 from datanode 3d33bf98-5a35-4ffe-98b7-ad9436d61ee1 2023-05-31 13:55:55,568 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f6bca44e83799a7: from storage DS-bffb245e-8987-4142-9f54-74754d3b92d2 node DatanodeRegistration(127.0.0.1:42763, datanodeUuid=3d33bf98-5a35-4ffe-98b7-ad9436d61ee1, infoPort=42443, infoSecurePort=0, ipcPort=41065, storageInfo=lv=-57;cid=testClusterID;nsid=1322638984;c=1685541355172), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 13:55:55,568 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f6bca44e83799a7: Processing first storage report for DS-4efa7a0a-b6d1-428d-bfdc-c1faf54669c8 from datanode 3d33bf98-5a35-4ffe-98b7-ad9436d61ee1 2023-05-31 13:55:55,568 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f6bca44e83799a7: from storage DS-4efa7a0a-b6d1-428d-bfdc-c1faf54669c8 node DatanodeRegistration(127.0.0.1:42763, datanodeUuid=3d33bf98-5a35-4ffe-98b7-ad9436d61ee1, infoPort=42443, infoSecurePort=0, ipcPort=41065, storageInfo=lv=-57;cid=testClusterID;nsid=1322638984;c=1685541355172), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:55:55,624 DEBUG [Listener at 
localhost.localdomain/41065] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a 2023-05-31 13:55:55,627 INFO [Listener at localhost.localdomain/41065] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/zookeeper_0, clientPort=62916, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 13:55:55,628 INFO [Listener at localhost.localdomain/41065] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62916 2023-05-31 13:55:55,628 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,629 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,649 INFO [Listener at localhost.localdomain/41065] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb with version=8 2023-05-31 13:55:55,649 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging 2023-05-31 13:55:55,650 INFO [Listener at localhost.localdomain/41065] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:55:55,651 INFO [Listener at localhost.localdomain/41065] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 13:55:55,653 INFO [Listener at localhost.localdomain/41065] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38103 2023-05-31 13:55:55,653 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,654 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,655 INFO [Listener at localhost.localdomain/41065] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38103 connecting to ZooKeeper ensemble=127.0.0.1:62916 2023-05-31 13:55:55,660 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:381030x0, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:55:55,661 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38103-0x10081857b880000 connected 2023-05-31 13:55:55,674 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:55:55,675 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:55:55,675 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:55:55,676 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38103 2023-05-31 13:55:55,676 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38103 2023-05-31 13:55:55,677 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38103 2023-05-31 13:55:55,677 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38103 2023-05-31 13:55:55,677 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38103 2023-05-31 13:55:55,677 INFO [Listener at localhost.localdomain/41065] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb, hbase.cluster.distributed=false 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:55:55,692 INFO [Listener at localhost.localdomain/41065] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:55:55,694 INFO [Listener at localhost.localdomain/41065] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33397 2023-05-31 13:55:55,695 INFO [Listener at localhost.localdomain/41065] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:55:55,696 DEBUG [Listener at localhost.localdomain/41065] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:55:55,697 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,698 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,699 INFO [Listener at localhost.localdomain/41065] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33397 connecting to ZooKeeper ensemble=127.0.0.1:62916 2023-05-31 13:55:55,703 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:333970x0, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:55:55,704 DEBUG 
[Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): regionserver:333970x0, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:55:55,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33397-0x10081857b880001 connected 2023-05-31 13:55:55,705 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:55:55,705 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:55:55,706 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33397 2023-05-31 13:55:55,707 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33397 2023-05-31 13:55:55,708 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33397 2023-05-31 13:55:55,712 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33397 2023-05-31 13:55:55,712 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33397 2023-05-31 13:55:55,714 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,716 DEBUG [Listener at localhost.localdomain/41065-EventThread] 
zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:55:55,716 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,717 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:55:55,717 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:55:55,717 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,718 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:55:55,718 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:55:55,719 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,38103,1685541355650 from backup master directory 2023-05-31 13:55:55,719 DEBUG [Listener at 
localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,719 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:55:55,719 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:55:55,719 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,737 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/hbase.id with ID: 3689c683-a698-4406-95ac-6faa62f58764 2023-05-31 13:55:55,749 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:55,751 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,758 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5fd02623 to 127.0.0.1:62916 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:55:55,763 
DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5900217b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:55:55,763 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:55,764 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 13:55:55,764 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:55,765 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store-tmp 2023-05-31 13:55:55,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:55,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:55:55,773 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:55,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:55,773 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:55:55,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:55:55,774 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:55:55,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:55:55,774 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/WALs/jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,777 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38103%2C1685541355650, suffix=, logDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/WALs/jenkins-hbase17.apache.org,38103,1685541355650, archiveDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/oldWALs, maxLogs=10 2023-05-31 13:55:55,785 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/WALs/jenkins-hbase17.apache.org,38103,1685541355650/jenkins-hbase17.apache.org%2C38103%2C1685541355650.1685541355777 2023-05-31 13:55:55,785 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK]] 2023-05-31 13:55:55,785 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:55,786 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:55,786 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,786 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,787 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,789 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 13:55:55,789 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 13:55:55,790 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:55,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,791 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:55:55,796 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:55,797 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=844778, jitterRate=0.07419176399707794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:55:55,797 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:55:55,797 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 13:55:55,798 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 13:55:55,798 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 13:55:55,799 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 13:55:55,799 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 13:55:55,799 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 13:55:55,799 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 13:55:55,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 13:55:55,801 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 13:55:55,810 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 13:55:55,810 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 13:55:55,811 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 13:55:55,811 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 13:55:55,812 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 13:55:55,813 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,814 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 13:55:55,814 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 13:55:55,815 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 13:55:55,816 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:55,816 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:55:55,816 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,816 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,38103,1685541355650, sessionid=0x10081857b880000, setting cluster-up flag (Was=false) 2023-05-31 13:55:55,819 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,821 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 13:55:55,822 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,824 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:55,827 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 13:55:55,828 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:55,828 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.hbase-snapshot/.tmp 2023-05-31 13:55:55,831 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:55,831 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:55,832 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:55:55,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,833 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541385833 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 13:55:55,834 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 13:55:55,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 13:55:55,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 13:55:55,835 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:55,835 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 13:55:55,836 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 13:55:55,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 13:55:55,836 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541355836,5,FailOnTimeoutGroup] 2023-05-31 13:55:55,836 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541355836,5,FailOnTimeoutGroup] 2023-05-31 13:55:55,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 13:55:55,837 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,837 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:55,838 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:55,851 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:55,851 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:55,851 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb 2023-05-31 13:55:55,861 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:55,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:55:55,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/info 2023-05-31 13:55:55,866 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:55:55,867 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:55,867 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:55:55,868 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:55,869 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 
13:55:55,869 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:55,869 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:55:55,871 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/table 2023-05-31 13:55:55,871 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:55:55,872 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:55,873 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740 2023-05-31 13:55:55,873 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740 2023-05-31 13:55:55,875 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 13:55:55,877 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:55:55,878 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:55,879 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=834004, jitterRate=0.060491129755973816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:55:55,879 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:55:55,879 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:55:55,879 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:55:55,879 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:55:55,879 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:55:55,879 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-05-31 13:55:55,879 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:55:55,880 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:55:55,881 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:55:55,881 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 13:55:55,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:55:55,883 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:55:55,884 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:55:55,915 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(951): ClusterId : 3689c683-a698-4406-95ac-6faa62f58764 2023-05-31 13:55:55,916 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:55:55,919 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:55:55,919 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:55:55,921 DEBUG [RS:0;jenkins-hbase17:33397] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:55:55,922 DEBUG [RS:0;jenkins-hbase17:33397] zookeeper.ReadOnlyZKClient(139): Connect 0x554d99ed to 127.0.0.1:62916 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:55:55,926 DEBUG [RS:0;jenkins-hbase17:33397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b8a6d34, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:55:55,926 DEBUG [RS:0;jenkins-hbase17:33397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fb5f29d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:55:55,933 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:33397 2023-05-31 13:55:55,933 INFO [RS:0;jenkins-hbase17:33397] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:55:55,933 INFO [RS:0;jenkins-hbase17:33397] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:55:55,934 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 13:55:55,934 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,38103,1685541355650 with isa=jenkins-hbase17.apache.org/136.243.18.41:33397, startcode=1685541355691 2023-05-31 13:55:55,934 DEBUG [RS:0;jenkins-hbase17:33397] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:55:55,940 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:41207, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:55:55,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,942 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb 2023-05-31 13:55:55,942 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35345 2023-05-31 13:55:55,942 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:55:55,943 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:55:55,944 DEBUG [RS:0;jenkins-hbase17:33397] zookeeper.ZKUtil(162): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,944 WARN [RS:0;jenkins-hbase17:33397] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:55:55,944 INFO [RS:0;jenkins-hbase17:33397] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:55,945 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,945 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33397,1685541355691] 2023-05-31 13:55:55,949 DEBUG [RS:0;jenkins-hbase17:33397] zookeeper.ZKUtil(162): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,950 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:55:55,950 INFO [RS:0;jenkins-hbase17:33397] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:55:55,952 INFO [RS:0;jenkins-hbase17:33397] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:55:55,952 INFO [RS:0;jenkins-hbase17:33397] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:55:55,952 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:55,956 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:55:55,958 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,958 DEBUG [RS:0;jenkins-hbase17:33397] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:55:55,959 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,959 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,960 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:55,969 INFO [RS:0;jenkins-hbase17:33397] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:55:55,969 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33397,1685541355691-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:55,980 INFO [RS:0;jenkins-hbase17:33397] regionserver.Replication(203): jenkins-hbase17.apache.org,33397,1685541355691 started 2023-05-31 13:55:55,980 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33397,1685541355691, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33397, sessionid=0x10081857b880001 2023-05-31 13:55:55,980 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:55:55,980 DEBUG [RS:0;jenkins-hbase17:33397] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,980 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33397,1685541355691' 2023-05-31 13:55:55,980 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33397,1685541355691' 2023-05-31 13:55:55,981 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 13:55:55,982 DEBUG [RS:0;jenkins-hbase17:33397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:55:55,982 DEBUG [RS:0;jenkins-hbase17:33397] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:55:55,982 INFO [RS:0;jenkins-hbase17:33397] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:55:55,982 INFO [RS:0;jenkins-hbase17:33397] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 13:55:56,034 DEBUG [jenkins-hbase17:38103] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:55:56,035 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,33397,1685541355691, state=OPENING 2023-05-31 13:55:56,036 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:55:56,037 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:56,038 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:55:56,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,33397,1685541355691}] 2023-05-31 13:55:56,085 INFO [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33397%2C1685541355691, suffix=, 
logDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691, archiveDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs, maxLogs=32 2023-05-31 13:55:56,095 INFO [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541356085 2023-05-31 13:55:56,095 DEBUG [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK]] 2023-05-31 13:55:56,192 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:56,192 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:55:56,194 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42946, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:55:56,198 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:55:56,198 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:55:56,200 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33397%2C1685541355691.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691, archiveDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs, maxLogs=32 2023-05-31 13:55:56,210 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.meta.1685541356200.meta 2023-05-31 13:55:56,210 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK]] 2023-05-31 13:55:56,210 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:55:56,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:55:56,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:55:56,213 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:55:56,214 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/info 2023-05-31 13:55:56,214 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/info 2023-05-31 13:55:56,214 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:55:56,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:56,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:55:56,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:56,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:55:56,217 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:55:56,217 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:56,217 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:55:56,218 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/table 2023-05-31 13:55:56,218 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/table 2023-05-31 13:55:56,219 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:55:56,219 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:56,220 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740 2023-05-31 13:55:56,221 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740 2023-05-31 13:55:56,224 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:55:56,225 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:55:56,226 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=864851, jitterRate=0.09971585869789124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:55:56,226 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:55:56,230 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541356192 2023-05-31 13:55:56,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:55:56,237 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:55:56,238 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,33397,1685541355691, state=OPEN 2023-05-31 13:55:56,239 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:55:56,239 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:55:56,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:55:56,242 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,33397,1685541355691 in 201 msec 2023-05-31 13:55:56,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:55:56,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 361 msec 2023-05-31 13:55:56,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 416 msec 2023-05-31 13:55:56,247 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541356247, completionTime=-1 2023-05-31 13:55:56,247 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:55:56,247 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:55:56,252 DEBUG [hconnection-0x44866dfc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:55:56,254 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:55:56,256 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:55:56,256 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541416256 2023-05-31 13:55:56,256 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541476256 2023-05-31 13:55:56,256 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38103,1685541355650-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38103,1685541355650-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38103,1685541355650-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:38103, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 13:55:56,262 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 13:55:56,263 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:56,264 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 13:55:56,264 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 13:55:56,266 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:55:56,267 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:55:56,270 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,271 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb empty. 2023-05-31 13:55:56,271 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,271 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 13:55:56,288 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:56,289 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 58ad17134910443a13bdeacc96421ddb, NAME => 'hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 58ad17134910443a13bdeacc96421ddb, disabling compactions & flushes 2023-05-31 13:55:56,302 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. after waiting 0 ms 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,302 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 58ad17134910443a13bdeacc96421ddb: 2023-05-31 13:55:56,305 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:55:56,306 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541356306"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541356306"}]},"ts":"1685541356306"} 2023-05-31 13:55:56,309 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:55:56,310 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:55:56,310 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541356310"}]},"ts":"1685541356310"} 2023-05-31 13:55:56,312 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 13:55:56,315 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=58ad17134910443a13bdeacc96421ddb, ASSIGN}] 2023-05-31 13:55:56,318 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=58ad17134910443a13bdeacc96421ddb, ASSIGN 2023-05-31 13:55:56,319 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=58ad17134910443a13bdeacc96421ddb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33397,1685541355691; forceNewPlan=false, retain=false 2023-05-31 13:55:56,470 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=58ad17134910443a13bdeacc96421ddb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:56,471 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541356470"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541356470"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541356470"}]},"ts":"1685541356470"} 2023-05-31 13:55:56,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 58ad17134910443a13bdeacc96421ddb, server=jenkins-hbase17.apache.org,33397,1685541355691}] 2023-05-31 13:55:56,630 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58ad17134910443a13bdeacc96421ddb, NAME => 'hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:56,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:56,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,632 INFO 
[StoreOpener-58ad17134910443a13bdeacc96421ddb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,633 DEBUG [StoreOpener-58ad17134910443a13bdeacc96421ddb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/info 2023-05-31 13:55:56,633 DEBUG [StoreOpener-58ad17134910443a13bdeacc96421ddb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/info 2023-05-31 13:55:56,634 INFO [StoreOpener-58ad17134910443a13bdeacc96421ddb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58ad17134910443a13bdeacc96421ddb columnFamilyName info 2023-05-31 13:55:56,634 INFO [StoreOpener-58ad17134910443a13bdeacc96421ddb-1] regionserver.HStore(310): Store=58ad17134910443a13bdeacc96421ddb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 13:55:56,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,636 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,639 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:55:56,641 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:56,641 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 58ad17134910443a13bdeacc96421ddb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=860714, jitterRate=0.09445539116859436}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:55:56,641 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 58ad17134910443a13bdeacc96421ddb: 2023-05-31 13:55:56,643 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb., pid=6, masterSystemTime=1685541356626 2023-05-31 13:55:56,645 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,645 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:55:56,646 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=58ad17134910443a13bdeacc96421ddb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:56,646 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541356645"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541356645"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541356645"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541356645"}]},"ts":"1685541356645"} 2023-05-31 13:55:56,649 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 13:55:56,649 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 58ad17134910443a13bdeacc96421ddb, server=jenkins-hbase17.apache.org,33397,1685541355691 in 174 msec 2023-05-31 13:55:56,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 13:55:56,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=58ad17134910443a13bdeacc96421ddb, ASSIGN in 334 msec 2023-05-31 13:55:56,651 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:55:56,651 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541356651"}]},"ts":"1685541356651"} 2023-05-31 13:55:56,653 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 13:55:56,655 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:55:56,657 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 392 msec 2023-05-31 13:55:56,665 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 13:55:56,666 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:56,666 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:56,670 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 13:55:56,678 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, 
quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:56,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-31 13:55:56,693 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 13:55:56,701 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:55:56,705 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-31 13:55:56,716 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 13:55:56,717 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 13:55:56,717 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.997sec 2023-05-31 13:55:56,717 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 13:55:56,717 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 13:55:56,718 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 13:55:56,718 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38103,1685541355650-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 13:55:56,718 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38103,1685541355650-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 13:55:56,719 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 13:55:56,815 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ReadOnlyZKClient(139): Connect 0x3a97899d to 127.0.0.1:62916 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:55:56,819 DEBUG [Listener at localhost.localdomain/41065] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e269e58, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:55:56,821 DEBUG [hconnection-0x2e8dae24-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:55:56,823 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42958, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:55:56,824 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:55:56,825 INFO [Listener at localhost.localdomain/41065] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:55:56,827 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 13:55:56,827 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:55:56,828 INFO [Listener at localhost.localdomain/41065] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 13:55:56,830 DEBUG [Listener at localhost.localdomain/41065] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 13:55:56,833 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55518, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 13:55:56,834 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 13:55:56,835 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 13:55:56,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:55:56,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:55:56,838 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:55:56,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-31 13:55:56,839 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:55:56,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 13:55:56,841 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:56,841 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8 empty. 2023-05-31 13:55:56,842 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:56,842 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-31 13:55:56,858 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 13:55:56,859 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => d67df2d2424cef580700a5d4375c9ef8, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/.tmp 2023-05-31 13:55:56,867 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:56,867 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing d67df2d2424cef580700a5d4375c9ef8, disabling compactions & flushes 2023-05-31 13:55:56,868 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:56,868 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:56,868 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. after waiting 0 ms 2023-05-31 13:55:56,868 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 
2023-05-31 13:55:56,868 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:56,868 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:55:56,870 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:55:56,871 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685541356871"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541356871"}]},"ts":"1685541356871"} 2023-05-31 13:55:56,873 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:55:56,874 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:55:56,874 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541356874"}]},"ts":"1685541356874"} 2023-05-31 13:55:56,875 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-31 13:55:56,879 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d67df2d2424cef580700a5d4375c9ef8, ASSIGN}] 2023-05-31 13:55:56,881 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d67df2d2424cef580700a5d4375c9ef8, ASSIGN 2023-05-31 13:55:56,882 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d67df2d2424cef580700a5d4375c9ef8, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33397,1685541355691; forceNewPlan=false, retain=false 2023-05-31 13:55:57,033 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d67df2d2424cef580700a5d4375c9ef8, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33397,1685541355691 
2023-05-31 13:55:57,033 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685541357033"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541357033"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541357033"}]},"ts":"1685541357033"} 2023-05-31 13:55:57,036 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure d67df2d2424cef580700a5d4375c9ef8, server=jenkins-hbase17.apache.org,33397,1685541355691}] 2023-05-31 13:55:57,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:57,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d67df2d2424cef580700a5d4375c9ef8, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:55:57,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:55:57,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 
d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,194 INFO [StoreOpener-d67df2d2424cef580700a5d4375c9ef8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,196 DEBUG [StoreOpener-d67df2d2424cef580700a5d4375c9ef8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info 2023-05-31 13:55:57,196 DEBUG [StoreOpener-d67df2d2424cef580700a5d4375c9ef8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info 2023-05-31 13:55:57,196 INFO [StoreOpener-d67df2d2424cef580700a5d4375c9ef8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
d67df2d2424cef580700a5d4375c9ef8 columnFamilyName info 2023-05-31 13:55:57,197 INFO [StoreOpener-d67df2d2424cef580700a5d4375c9ef8-1] regionserver.HStore(310): Store=d67df2d2424cef580700a5d4375c9ef8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:55:57,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:55:57,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:55:57,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened d67df2d2424cef580700a5d4375c9ef8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=764606, jitterRate=-0.027753770351409912}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:55:57,204 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:55:57,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8., pid=11, masterSystemTime=1685541357188 2023-05-31 13:55:57,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:57,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:55:57,208 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d67df2d2424cef580700a5d4375c9ef8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:55:57,208 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685541357207"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541357207"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541357207"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541357207"}]},"ts":"1685541357207"} 2023-05-31 13:55:57,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 13:55:57,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure d67df2d2424cef580700a5d4375c9ef8, 
server=jenkins-hbase17.apache.org,33397,1685541355691 in 174 msec 2023-05-31 13:55:57,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 13:55:57,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d67df2d2424cef580700a5d4375c9ef8, ASSIGN in 334 msec 2023-05-31 13:55:57,217 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:55:57,217 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541357217"}]},"ts":"1685541357217"} 2023-05-31 13:55:57,219 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-31 13:55:57,222 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:55:57,224 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 387 msec 2023-05-31 13:55:59,741 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 13:56:01,950 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 13:56:01,951 DEBUG 
[HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 13:56:01,951 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:06,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 13:56:06,841 INFO [Listener at localhost.localdomain/41065] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-31 13:56:06,844 DEBUG [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:06,844 DEBUG [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 
2023-05-31 13:56:06,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-05-31 13:56:06,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-31 13:56:06,865 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-31 13:56:06,865 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:06,865 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-31 13:56:06,865 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 
2023-05-31 13:56:06,866 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,866 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 13:56:06,867 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:06,867 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,867 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:06,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:06,867 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,867 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 13:56:06,867 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 13:56:06,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 13:56:06,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 13:56:06,869 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-31 13:56:06,871 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-31 13:56:06,871 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-31 13:56:06,871 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:06,872 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-31 13:56:06,872 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 13:56:06,872 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 
2023-05-31 13:56:06,872 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:56:06,873 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. started... 2023-05-31 13:56:06,873 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 58ad17134910443a13bdeacc96421ddb 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 13:56:06,884 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/.tmp/info/df9e1672d3474477852826b781da03dd 2023-05-31 13:56:06,895 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/.tmp/info/df9e1672d3474477852826b781da03dd as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/info/df9e1672d3474477852826b781da03dd 2023-05-31 13:56:06,902 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/info/df9e1672d3474477852826b781da03dd, entries=2, sequenceid=6, filesize=4.8 K 
2023-05-31 13:56:06,903 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 58ad17134910443a13bdeacc96421ddb in 30ms, sequenceid=6, compaction requested=false 2023-05-31 13:56:06,904 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 58ad17134910443a13bdeacc96421ddb: 2023-05-31 13:56:06,904 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:56:06,904 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 13:56:06,904 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-31 13:56:06,904 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,904 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-31 13:56:06,904 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-31 13:56:06,906 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,906 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:06,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:06,906 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,906 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 13:56:06,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:06,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:06,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:06,908 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-31 13:56:06,908 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-31 13:56:06,908 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@683feb3e[Count = 0] remaining members to acquire global barrier 2023-05-31 13:56:06,908 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,910 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,910 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,910 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,910 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 
2023-05-31 13:56:06,910 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-31 13:56:06,910 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,910 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase17.apache.org,33397,1685541355691' in zk 2023-05-31 13:56:06,910 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 13:56:06,911 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,911 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-31 13:56:06,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,912 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:06,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:06,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:06,912 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-31 13:56:06,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:06,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:06,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:06,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase17.apache.org,33397,1685541355691': 2023-05-31 13:56:06,915 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-31 13:56:06,915 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 13:56:06,915 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' released barrier for procedure'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-05-31 13:56:06,915 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 13:56:06,915 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-31 13:56:06,915 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 13:56:06,917 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,917 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,917 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:06,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:06,917 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): 
regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:06,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:06,918 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,918 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:06,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:06,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:06,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,919 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on 
existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:06,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 13:56:06,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 13:56:06,926 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 13:56:06,926 DEBUG 
[(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:06,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-31 13:56:06,926 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:06,926 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:06,927 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 13:56:06,927 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 13:56:06,927 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 13:56:06,928 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:06,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:06,929 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-31 13:56:06,930 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 13:56:16,930 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 13:56:16,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 13:56:16,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-05-31 13:56:16,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,949 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:16,949 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:16,949 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 13:56:16,949 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 13:56:16,950 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,950 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,951 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,951 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:16,951 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:16,951 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:16,951 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,951 DEBUG 
[(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 13:56:16,951 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,952 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 13:56:16,952 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,952 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,952 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,952 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 13:56:16,952 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:16,953 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 
'acquire' stage 2023-05-31 13:56:16,953 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 13:56:16,953 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 13:56:16,953 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:16,953 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. started... 2023-05-31 13:56:16,953 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d67df2d2424cef580700a5d4375c9ef8 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:56:16,965 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/0317e73bcd324d3fa02f3f6e7b7f7c83 2023-05-31 13:56:16,972 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/0317e73bcd324d3fa02f3f6e7b7f7c83 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83 2023-05-31 13:56:16,981 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83, entries=1, sequenceid=5, filesize=5.8 K 2023-05-31 13:56:16,982 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d67df2d2424cef580700a5d4375c9ef8 in 29ms, sequenceid=5, compaction requested=false 2023-05-31 13:56:16,982 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:16,983 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:16,983 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 
2023-05-31 13:56:16,983 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 13:56:16,983 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,983 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 13:56:16,983 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 13:56:16,985 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,985 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,985 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:16,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:16,985 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,985 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 13:56:16,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:16,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:16,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:16,987 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 13:56:16,987 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6c0062e1[Count = 0] remaining members to acquire global barrier 2023-05-31 13:56:16,987 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 
'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 13:56:16,987 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,987 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,987 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,987 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-31 13:56:16,988 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,988 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 13:56:16,988 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 13:56:16,988 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,33397,1685541355691' in zk 2023-05-31 13:56:16,989 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,989 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 13:56:16,989 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,989 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:16,989 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:16,989 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 13:56:16,989 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 13:56:16,990 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:16,990 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:16,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:16,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,33397,1685541355691': 2023-05-31 13:56:16,992 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' released barrier for 
procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 13:56:16,992 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 13:56:16,992 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 13:56:16,992 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 13:56:16,992 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,992 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 13:56:16,994 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,994 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:16,994 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): 
Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,994 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:16,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:16,995 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:16,995 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:16,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:16,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting 
procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:16,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,996 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:16,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,999 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:16,999 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:16,999 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:16,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:16,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-31 13:56:16,999 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:16,999 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:16,999 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:16,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:16,999 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-31 13:56:17,000 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:17,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:17,000 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,000 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 13:56:27,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 13:56:27,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-05-31 13:56:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 13:56:27,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,019 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:27,020 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:27,020 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 13:56:27,020 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 13:56:27,021 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,021 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,022 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:27,022 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,022 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:27,022 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:27,022 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,022 DEBUG 
[(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 13:56:27,022 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,023 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 13:56:27,023 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,023 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,023 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 13:56:27,023 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,023 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 13:56:27,024 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:27,024 
DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 13:56:27,024 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 13:56:27,024 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 13:56:27,024 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:27,024 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. started... 
2023-05-31 13:56:27,024 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d67df2d2424cef580700a5d4375c9ef8 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:56:27,035 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/804811e6dd1c4350942144db19591b18 2023-05-31 13:56:27,043 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/804811e6dd1c4350942144db19591b18 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18 2023-05-31 13:56:27,050 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18, entries=1, sequenceid=9, filesize=5.8 K 2023-05-31 13:56:27,051 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d67df2d2424cef580700a5d4375c9ef8 in 27ms, sequenceid=9, compaction 
requested=false 2023-05-31 13:56:27,051 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:27,051 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:27,051 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 13:56:27,051 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 13:56:27,051 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,051 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 13:56:27,051 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 13:56:27,053 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-05-31 13:56:27,053 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,053 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,053 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:27,053 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:27,053 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,053 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 13:56:27,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:27,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:27,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 
13:56:27,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:27,055 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 13:56:27,055 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2902e105[Count = 0] remaining members to acquire global barrier 2023-05-31 13:56:27,055 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 13:56:27,055 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,056 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,056 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,056 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,056 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-05-31 13:56:27,056 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 13:56:27,056 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,33397,1685541355691' in zk 2023-05-31 13:56:27,056 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,056 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 13:56:27,057 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 13:56:27,057 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,057 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:27,057 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,057 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:27,058 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:27,057 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 13:56:27,058 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:27,058 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:27,058 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,059 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,059 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:27,059 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,059 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,060 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,33397,1685541355691': 2023-05-31 13:56:27,060 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 13:56:27,060 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 13:56:27,060 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 13:56:27,060 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 13:56:27,060 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,060 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 13:56:27,061 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,061 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,061 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,061 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:27,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:27,061 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:27,061 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,061 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:27,062 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,062 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,063 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,063 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:27,063 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,064 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,066 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:27,066 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:27,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:27,066 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:27,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:27,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 13:56:27,067 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 13:56:27,067 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,067 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,067 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:27,067 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:27,067 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-05-31 13:56:27,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:27,067 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 13:56:37,067 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-31 13:56:37,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 13:56:37,090 INFO [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541356085 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541397075 2023-05-31 13:56:37,091 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK], DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK]] 2023-05-31 13:56:37,091 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541356085 is not closed yet, will try archiving it next time 2023-05-31 13:56:37,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(933): 
Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-05-31 13:56:37,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 13:56:37,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,099 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:37,099 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:37,100 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 13:56:37,100 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 13:56:37,101 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,101 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,102 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,102 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:37,102 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:37,102 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:37,102 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,102 DEBUG 
[(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 13:56:37,102 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,103 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 13:56:37,103 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,103 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,103 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 13:56:37,103 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,103 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 13:56:37,103 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:37,104 
DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 13:56:37,104 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 13:56:37,104 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 13:56:37,104 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:37,104 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. started... 
2023-05-31 13:56:37,104 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d67df2d2424cef580700a5d4375c9ef8 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:56:37,117 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/8172231710f5405f8a3c6fc290fcdd04 2023-05-31 13:56:37,125 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/8172231710f5405f8a3c6fc290fcdd04 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04 2023-05-31 13:56:37,132 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04, entries=1, sequenceid=13, filesize=5.8 K 2023-05-31 13:56:37,135 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d67df2d2424cef580700a5d4375c9ef8 in 31ms, sequenceid=13, compaction 
requested=true 2023-05-31 13:56:37,135 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:37,135 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:37,135 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 13:56:37,135 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 13:56:37,135 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,135 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 13:56:37,135 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 13:56:37,137 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-05-31 13:56:37,137 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:37,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:37,137 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,137 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 13:56:37,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:37,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:37,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 
13:56:37,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:37,140 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 13:56:37,140 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@f90d135[Count = 0] remaining members to acquire global barrier 2023-05-31 13:56:37,140 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 13:56:37,140 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,140 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,141 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,141 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,141 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-05-31 13:56:37,141 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 13:56:37,141 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,33397,1685541355691' in zk 2023-05-31 13:56:37,141 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,141 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 13:56:37,142 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,142 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 13:56:37,143 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:37,142 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:37,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:37,143 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 13:56:37,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:37,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:37,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:37,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,33397,1685541355691': 2023-05-31 13:56:37,146 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 13:56:37,146 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 13:56:37,146 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 13:56:37,146 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 13:56:37,146 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,146 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 13:56:37,147 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,147 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,147 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:37,147 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,148 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:37,148 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:37,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:37,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,149 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,150 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,150 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:37,150 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,152 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,153 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:37,153 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:37,153 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:37,153 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:37,153 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:37,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-31 13:56:37,153 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 13:56:37,153 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 13:56:37,153 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,154 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-31 13:56:37,154 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:37,154 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,154 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 13:56:47,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 13:56:47,157 DEBUG [Listener at localhost.localdomain/41065] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:56:47,165 DEBUG [Listener at localhost.localdomain/41065] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:56:47,166 DEBUG [Listener at localhost.localdomain/41065] regionserver.HStore(1912): d67df2d2424cef580700a5d4375c9ef8/info is initiating minor compaction (all files) 2023-05-31 13:56:47,166 INFO [Listener at localhost.localdomain/41065] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:56:47,166 INFO [Listener at localhost.localdomain/41065] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:47,166 INFO [Listener at localhost.localdomain/41065] regionserver.HRegion(2259): Starting compaction of d67df2d2424cef580700a5d4375c9ef8/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 
2023-05-31 13:56:47,166 INFO [Listener at localhost.localdomain/41065] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04] into tmpdir=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp, totalSize=17.4 K 2023-05-31 13:56:47,167 DEBUG [Listener at localhost.localdomain/41065] compactions.Compactor(207): Compacting 0317e73bcd324d3fa02f3f6e7b7f7c83, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685541376940 2023-05-31 13:56:47,168 DEBUG [Listener at localhost.localdomain/41065] compactions.Compactor(207): Compacting 804811e6dd1c4350942144db19591b18, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685541387006 2023-05-31 13:56:47,168 DEBUG [Listener at localhost.localdomain/41065] compactions.Compactor(207): Compacting 8172231710f5405f8a3c6fc290fcdd04, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685541397070 2023-05-31 13:56:47,184 INFO [Listener at localhost.localdomain/41065] throttle.PressureAwareThroughputController(145): d67df2d2424cef580700a5d4375c9ef8#info#compaction#19 average 
throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:56:47,200 DEBUG [Listener at localhost.localdomain/41065] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/29b6091750884c81a2f78c565629eea9 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/29b6091750884c81a2f78c565629eea9 2023-05-31 13:56:47,207 INFO [Listener at localhost.localdomain/41065] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d67df2d2424cef580700a5d4375c9ef8/info of d67df2d2424cef580700a5d4375c9ef8 into 29b6091750884c81a2f78c565629eea9(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:56:47,207 DEBUG [Listener at localhost.localdomain/41065] regionserver.HRegion(2289): Compaction status journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:47,228 INFO [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541397075 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541407209 2023-05-31 13:56:47,228 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK], DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK]] 2023-05-31 13:56:47,228 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541397075 is not closed yet, will try archiving it next time 2023-05-31 13:56:47,233 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541356085 to hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541356085 2023-05-31 13:56:47,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-05-31 
13:56:47,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 13:56:47,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,242 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:47,242 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:47,243 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 13:56:47,243 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 13:56:47,243 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,243 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,245 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,245 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:47,245 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:47,245 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:47,245 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,246 DEBUG 
[(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 13:56:47,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,246 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 13:56:47,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,246 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,246 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 13:56:47,247 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,247 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 13:56:47,247 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 13:56:47,247 
DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 13:56:47,248 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 13:56:47,248 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 13:56:47,248 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:47,248 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. started... 
2023-05-31 13:56:47,248 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d67df2d2424cef580700a5d4375c9ef8 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:56:47,260 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/38651817ef204db3ac44a1b86cfe7756 2023-05-31 13:56:47,268 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/38651817ef204db3ac44a1b86cfe7756 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/38651817ef204db3ac44a1b86cfe7756 2023-05-31 13:56:47,275 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/38651817ef204db3ac44a1b86cfe7756, entries=1, sequenceid=18, filesize=5.8 K 2023-05-31 13:56:47,276 INFO [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d67df2d2424cef580700a5d4375c9ef8 in 28ms, sequenceid=18, compaction 
requested=false 2023-05-31 13:56:47,276 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:47,276 DEBUG [rs(jenkins-hbase17.apache.org,33397,1685541355691)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:47,276 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 13:56:47,276 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 13:56:47,276 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,276 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 13:56:47,276 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 13:56:47,279 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,279 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,279 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,279 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:47,279 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:47,279 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,280 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 13:56:47,280 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:47,280 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:47,281 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,281 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,281 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:47,281 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,33397,1685541355691' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 13:56:47,282 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@1861ade2[Count = 0] remaining members to acquire global barrier 2023-05-31 13:56:47,282 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 13:56:47,282 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,282 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,282 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,282 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,283 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(180): 
Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-31 13:56:47,283 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,283 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 13:56:47,283 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 13:56:47,283 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,33397,1685541355691' in zk 2023-05-31 13:56:47,285 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 13:56:47,285 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,285 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error 
notifications will be received for this timer. 2023-05-31 13:56:47,285 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,285 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:47,285 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:47,285 DEBUG [member: 'jenkins-hbase17.apache.org,33397,1685541355691' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 13:56:47,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:47,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:47,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,286 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:47,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,287 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,33397,1685541355691': 2023-05-31 13:56:47,287 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 
'jenkins-hbase17.apache.org,33397,1685541355691' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 13:56:47,287 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 13:56:47,288 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 13:56:47,288 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 13:56:47,288 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,288 INFO [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 13:56:47,289 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,289 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,289 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 13:56:47,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 13:56:47,289 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:47,289 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,290 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:47,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:47,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 13:56:47,290 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,291 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] 
zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,291 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 13:56:47,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,293 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 13:56:47,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,300 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,303 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,303 DEBUG [Listener at 
localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 13:56:47,303 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,303 DEBUG [(jenkins-hbase17.apache.org,38103,1685541355650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 13:56:47,303 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 13:56:47,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 13:56:47,303 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 13:56:47,304 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:47,303 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 13:56:47,304 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-05-31 13:56:47,304 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 13:56:47,303 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:47,305 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 13:56:47,305 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:47,305 DEBUG [Listener at 
localhost.localdomain/41065] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 13:56:47,305 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,305 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:47,305 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 13:56:57,305 DEBUG [Listener at localhost.localdomain/41065] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 13:56:57,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38103] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 13:56:57,330 INFO [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541407209 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541417316 2023-05-31 13:56:57,330 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40419,DS-749fe520-e75e-405a-a5ea-133f865916e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42763,DS-bffb245e-8987-4142-9f54-74754d3b92d2,DISK]] 2023-05-31 13:56:57,330 DEBUG [Listener at localhost.localdomain/41065] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541407209 is not closed yet, will try archiving it next time 2023-05-31 13:56:57,331 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541397075 to hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs/jenkins-hbase17.apache.org%2C33397%2C1685541355691.1685541397075 2023-05-31 13:56:57,331 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:56:57,331 INFO [Listener at 
localhost.localdomain/41065] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 13:56:57,331 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a97899d to 127.0.0.1:62916 2023-05-31 13:56:57,331 DEBUG [Listener at localhost.localdomain/41065] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:56:57,332 DEBUG [Listener at localhost.localdomain/41065] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:56:57,332 DEBUG [Listener at localhost.localdomain/41065] util.JVMClusterUtil(257): Found active master hash=300082462, stopped=false 2023-05-31 13:56:57,332 INFO [Listener at localhost.localdomain/41065] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:56:57,335 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:56:57,335 INFO [Listener at localhost.localdomain/41065] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:56:57,335 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:56:57,335 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:57,335 DEBUG [Listener at localhost.localdomain/41065] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5fd02623 to 127.0.0.1:62916 2023-05-31 13:56:57,336 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:56:57,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:56:57,336 DEBUG [Listener at localhost.localdomain/41065] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:56:57,336 INFO [Listener at localhost.localdomain/41065] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,33397,1685541355691' ***** 2023-05-31 13:56:57,336 INFO [Listener at localhost.localdomain/41065] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:56:57,336 INFO [RS:0;jenkins-hbase17:33397] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:56:57,336 INFO [RS:0;jenkins-hbase17:33397] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 13:56:57,337 INFO [RS:0;jenkins-hbase17:33397] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 13:56:57,336 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:56:57,337 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(3303): Received CLOSE for d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(3303): Received CLOSE for 58ad17134910443a13bdeacc96421ddb 2023-05-31 13:56:57,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing d67df2d2424cef580700a5d4375c9ef8, disabling compactions & flushes 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:57,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:57,341 DEBUG [RS:0;jenkins-hbase17:33397] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x554d99ed to 127.0.0.1:62916 2023-05-31 13:56:57,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:57,341 DEBUG [RS:0;jenkins-hbase17:33397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:56:57,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. after waiting 0 ms 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 13:56:57,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 13:56:57,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing d67df2d2424cef580700a5d4375c9ef8 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:56:57,341 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 13:56:57,341 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1478): Online Regions={d67df2d2424cef580700a5d4375c9ef8=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8., 1588230740=hbase:meta,,1.1588230740, 58ad17134910443a13bdeacc96421ddb=hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb.} 2023-05-31 13:56:57,341 DEBUG [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1504): Waiting on 1588230740, 58ad17134910443a13bdeacc96421ddb, d67df2d2424cef580700a5d4375c9ef8 2023-05-31 13:56:57,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:56:57,343 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:56:57,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on 
hbase:meta,,1.1588230740 2023-05-31 13:56:57,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:56:57,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:56:57,343 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-31 13:56:57,354 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/.tmp/info/7f59ba12229946f0944abf81201552f5 2023-05-31 13:56:57,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/11b5bf339f1d44859a5bc1e08ecfd073 2023-05-31 13:56:57,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/.tmp/info/11b5bf339f1d44859a5bc1e08ecfd073 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/11b5bf339f1d44859a5bc1e08ecfd073 2023-05-31 13:56:57,370 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/11b5bf339f1d44859a5bc1e08ecfd073, entries=1, sequenceid=22, filesize=5.8 K 2023-05-31 13:56:57,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d67df2d2424cef580700a5d4375c9ef8 in 30ms, sequenceid=22, compaction requested=true 2023-05-31 13:56:57,373 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/.tmp/table/a21907eb6fa84c8fa101c0d4b6188878 2023-05-31 13:56:57,375 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04] to archive 2023-05-31 13:56:57,375 DEBUG 
[StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-31 13:56:57,378 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83 to hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/0317e73bcd324d3fa02f3f6e7b7f7c83 2023-05-31 13:56:57,379 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18 to hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/804811e6dd1c4350942144db19591b18 2023-05-31 13:56:57,381 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04 to 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/info/8172231710f5405f8a3c6fc290fcdd04 2023-05-31 13:56:57,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/.tmp/info/7f59ba12229946f0944abf81201552f5 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/info/7f59ba12229946f0944abf81201552f5 2023-05-31 13:56:57,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d67df2d2424cef580700a5d4375c9ef8/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-31 13:56:57,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for d67df2d2424cef580700a5d4375c9ef8: 2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685541356834.d67df2d2424cef580700a5d4375c9ef8. 
2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 58ad17134910443a13bdeacc96421ddb, disabling compactions & flushes 2023-05-31 13:56:57,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. after waiting 0 ms 2023-05-31 13:56:57,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 
2023-05-31 13:56:57,394 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/info/7f59ba12229946f0944abf81201552f5, entries=20, sequenceid=14, filesize=7.6 K 2023-05-31 13:56:57,396 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/.tmp/table/a21907eb6fa84c8fa101c0d4b6188878 as hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/table/a21907eb6fa84c8fa101c0d4b6188878 2023-05-31 13:56:57,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/namespace/58ad17134910443a13bdeacc96421ddb/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 13:56:57,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 2023-05-31 13:56:57,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 58ad17134910443a13bdeacc96421ddb: 2023-05-31 13:56:57,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685541356262.58ad17134910443a13bdeacc96421ddb. 
2023-05-31 13:56:57,402 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/table/a21907eb6fa84c8fa101c0d4b6188878, entries=4, sequenceid=14, filesize=4.9 K 2023-05-31 13:56:57,403 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 60ms, sequenceid=14, compaction requested=false 2023-05-31 13:56:57,409 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 13:56:57,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 13:56:57,410 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:56:57,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:56:57,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 13:56:57,542 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33397,1685541355691; all regions closed. 
2023-05-31 13:56:57,543 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:57,549 DEBUG [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs 2023-05-31 13:56:57,550 INFO [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C33397%2C1685541355691.meta:.meta(num 1685541356200) 2023-05-31 13:56:57,550 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/WALs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:57,557 DEBUG [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/oldWALs 2023-05-31 13:56:57,557 INFO [RS:0;jenkins-hbase17:33397] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C33397%2C1685541355691:(num 1685541417316) 2023-05-31 13:56:57,557 DEBUG [RS:0;jenkins-hbase17:33397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:56:57,557 INFO [RS:0;jenkins-hbase17:33397] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:56:57,557 INFO [RS:0;jenkins-hbase17:33397] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 13:56:57,557 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 13:56:57,558 INFO [RS:0;jenkins-hbase17:33397] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33397 2023-05-31 13:56:57,561 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33397,1685541355691 2023-05-31 13:56:57,561 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:56:57,561 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:56:57,562 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33397,1685541355691] 2023-05-31 13:56:57,562 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33397,1685541355691; numProcessing=1 2023-05-31 13:56:57,563 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33397,1685541355691 already deleted, retry=false 2023-05-31 13:56:57,563 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,33397,1685541355691 expired; onlineServers=0 2023-05-31 13:56:57,563 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,38103,1685541355650' ***** 2023-05-31 13:56:57,563 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 
2023-05-31 13:56:57,563 DEBUG [M:0;jenkins-hbase17:38103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15dd63fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:56:57,564 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:56:57,564 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38103,1685541355650; all regions closed. 2023-05-31 13:56:57,564 DEBUG [M:0;jenkins-hbase17:38103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:56:57,564 DEBUG [M:0;jenkins-hbase17:38103] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 13:56:57,564 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 13:56:57,564 DEBUG [M:0;jenkins-hbase17:38103] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 13:56:57,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541355836] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541355836,5,FailOnTimeoutGroup] 2023-05-31 13:56:57,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541355836] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541355836,5,FailOnTimeoutGroup] 2023-05-31 13:56:57,566 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 13:56:57,565 INFO [M:0;jenkins-hbase17:38103] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 13:56:57,566 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:57,566 INFO [M:0;jenkins-hbase17:38103] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 13:56:57,566 INFO [M:0;jenkins-hbase17:38103] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-05-31 13:56:57,566 DEBUG [M:0;jenkins-hbase17:38103] master.HMaster(1512): Stopping service threads 2023-05-31 13:56:57,566 INFO [M:0;jenkins-hbase17:38103] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 13:56:57,567 ERROR [M:0;jenkins-hbase17:38103] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 13:56:57,567 INFO [M:0;jenkins-hbase17:38103] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 13:56:57,567 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-31 13:56:57,567 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:56:57,567 DEBUG [M:0;jenkins-hbase17:38103] zookeeper.ZKUtil(398): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 13:56:57,567 WARN [M:0;jenkins-hbase17:38103] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 13:56:57,567 INFO [M:0;jenkins-hbase17:38103] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 13:56:57,567 INFO [M:0;jenkins-hbase17:38103] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 13:56:57,568 DEBUG [M:0;jenkins-hbase17:38103] 
regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:56:57,568 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:56:57,568 DEBUG [M:0;jenkins-hbase17:38103] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:56:57,568 DEBUG [M:0;jenkins-hbase17:38103] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:56:57,568 DEBUG [M:0;jenkins-hbase17:38103] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:56:57,568 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB 2023-05-31 13:56:57,580 INFO [M:0;jenkins-hbase17:38103] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4ab78205b1bd47399d6ef65fe52d707c 2023-05-31 13:56:57,587 INFO [M:0;jenkins-hbase17:38103] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4ab78205b1bd47399d6ef65fe52d707c 2023-05-31 13:56:57,588 DEBUG [M:0;jenkins-hbase17:38103] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4ab78205b1bd47399d6ef65fe52d707c as 
hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4ab78205b1bd47399d6ef65fe52d707c 2023-05-31 13:56:57,595 INFO [M:0;jenkins-hbase17:38103] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4ab78205b1bd47399d6ef65fe52d707c 2023-05-31 13:56:57,595 INFO [M:0;jenkins-hbase17:38103] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35345/user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4ab78205b1bd47399d6ef65fe52d707c, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 13:56:57,596 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=100, compaction requested=false 2023-05-31 13:56:57,598 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:56:57,598 DEBUG [M:0;jenkins-hbase17:38103] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:56:57,598 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2222a300-a748-f009-40ac-2a0fbc9283cb/MasterData/WALs/jenkins-hbase17.apache.org,38103,1685541355650 2023-05-31 13:56:57,602 INFO [M:0;jenkins-hbase17:38103] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 13:56:57,602 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 13:56:57,603 INFO [M:0;jenkins-hbase17:38103] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38103 2023-05-31 13:56:57,604 DEBUG [M:0;jenkins-hbase17:38103] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,38103,1685541355650 already deleted, retry=false 2023-05-31 13:56:57,662 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:56:57,662 INFO [RS:0;jenkins-hbase17:33397] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33397,1685541355691; zookeeper connection closed. 2023-05-31 13:56:57,662 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x10081857b880001, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:56:57,663 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1fa272ea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1fa272ea 2023-05-31 13:56:57,663 INFO [Listener at localhost.localdomain/41065] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 13:56:57,762 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:56:57,762 INFO [M:0;jenkins-hbase17:38103] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38103,1685541355650; zookeeper connection closed. 
2023-05-31 13:56:57,763 DEBUG [Listener at localhost.localdomain/41065-EventThread] zookeeper.ZKWatcher(600): master:38103-0x10081857b880000, quorum=127.0.0.1:62916, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:56:57,763 WARN [Listener at localhost.localdomain/41065] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:56:57,768 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:56:57,877 WARN [BP-57075428-136.243.18.41-1685541355172 heartbeating to localhost.localdomain/127.0.0.1:35345] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:56:57,877 WARN [BP-57075428-136.243.18.41-1685541355172 heartbeating to localhost.localdomain/127.0.0.1:35345] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57075428-136.243.18.41-1685541355172 (Datanode Uuid 3d33bf98-5a35-4ffe-98b7-ad9436d61ee1) service to localhost.localdomain/127.0.0.1:35345 2023-05-31 13:56:57,878 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/dfs/data/data3/current/BP-57075428-136.243.18.41-1685541355172] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:56:57,878 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/dfs/data/data4/current/BP-57075428-136.243.18.41-1685541355172] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:56:57,880 WARN [Listener at localhost.localdomain/41065] 
datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:56:57,884 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:56:57,963 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 13:56:57,996 WARN [BP-57075428-136.243.18.41-1685541355172 heartbeating to localhost.localdomain/127.0.0.1:35345] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 13:56:57,997 WARN [BP-57075428-136.243.18.41-1685541355172 heartbeating to localhost.localdomain/127.0.0.1:35345] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57075428-136.243.18.41-1685541355172 (Datanode Uuid 2533f778-ea21-438f-ade5-b85e37ed9690) service to localhost.localdomain/127.0.0.1:35345
2023-05-31 13:56:57,999 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/dfs/data/data1/current/BP-57075428-136.243.18.41-1685541355172] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:56:58,000 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/cluster_0c33f31b-66e9-d379-6358-1cdd9909ea28/dfs/data/data2/current/BP-57075428-136.243.18.41-1685541355172] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:56:58,014 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 13:56:58,129 INFO [Listener at localhost.localdomain/41065] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 13:56:58,156 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 13:56:58,164 INFO [Listener at localhost.localdomain/41065] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=95 (was 87) - Thread LEAK? -, OpenFileDescriptor=499 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=137 (was 133) - SystemLoadAverage LEAK? -, ProcessCount=168 (was 171), AvailableMemoryMB=7329 (was 7388)
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=96, OpenFileDescriptor=499, MaxFileDescriptor=60000, SystemLoadAverage=137, ProcessCount=168, AvailableMemoryMB=7329
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/hadoop.log.dir so I do NOT create it in target/test-data/daca14f1-68ca-392d-c507-4d7339719b62
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8eed5065-e7f5-3272-eb6d-b4ed558d817a/hadoop.tmp.dir so I do NOT create it in target/test-data/daca14f1-68ca-392d-c507-4d7339719b62
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be, deleteOnExit=true
2023-05-31 13:56:58,171 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/test.cache.data in system properties and HBase conf
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/hadoop.log.dir in system properties and HBase conf
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 13:56:58,172 DEBUG [Listener at localhost.localdomain/41065] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 13:56:58,172 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/nfs.dump.dir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/java.io.tmpdir in system properties and HBase conf
2023-05-31 13:56:58,173 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 13:56:58,174 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 13:56:58,174 INFO [Listener at localhost.localdomain/41065] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 13:56:58,175 WARN [Listener at localhost.localdomain/41065] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 13:56:58,176 WARN [Listener at localhost.localdomain/41065] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 13:56:58,176 WARN [Listener at localhost.localdomain/41065] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 13:56:58,199 WARN [Listener at localhost.localdomain/41065] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:56:58,201 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:56:58,206 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/java.io.tmpdir/Jetty_localhost_localdomain_42945_hdfs____.oce57u/webapp
2023-05-31 13:56:58,280 INFO [Listener at localhost.localdomain/41065] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42945
2023-05-31 13:56:58,282 WARN [Listener at localhost.localdomain/41065] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 13:56:58,283 WARN [Listener at localhost.localdomain/41065] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 13:56:58,284 WARN [Listener at localhost.localdomain/41065] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 13:56:58,309 WARN [Listener at localhost.localdomain/40225] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:56:58,353 WARN [Listener at localhost.localdomain/40225] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:56:58,355 WARN [Listener at localhost.localdomain/40225] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:56:58,356 INFO [Listener at localhost.localdomain/40225] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:56:58,361 INFO [Listener at localhost.localdomain/40225] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/java.io.tmpdir/Jetty_localhost_44415_datanode____.ok408x/webapp
2023-05-31 13:56:58,431 INFO [Listener at localhost.localdomain/40225] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44415
2023-05-31 13:56:58,435 WARN [Listener at localhost.localdomain/43367] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:56:58,445 WARN [Listener at localhost.localdomain/43367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 13:56:58,447 WARN [Listener at localhost.localdomain/43367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 13:56:58,448 INFO [Listener at localhost.localdomain/43367] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 13:56:58,452 INFO [Listener at localhost.localdomain/43367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/java.io.tmpdir/Jetty_localhost_33201_datanode____afuri2/webapp
2023-05-31 13:56:58,487 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1e086f2734b909c: Processing first storage report for DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a from datanode 428e0772-ec96-45d3-a469-5e068b10505d
2023-05-31 13:56:58,487 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1e086f2734b909c: from storage DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a node DatanodeRegistration(127.0.0.1:46077, datanodeUuid=428e0772-ec96-45d3-a469-5e068b10505d, infoPort=34629, infoSecurePort=0, ipcPort=43367, storageInfo=lv=-57;cid=testClusterID;nsid=17516913;c=1685541418177), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 13:56:58,487 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1e086f2734b909c: Processing first storage report for DS-af4c2c5a-c2a9-40e4-b154-000ea2869ed7 from datanode 428e0772-ec96-45d3-a469-5e068b10505d
2023-05-31 13:56:58,487 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1e086f2734b909c: from storage DS-af4c2c5a-c2a9-40e4-b154-000ea2869ed7 node DatanodeRegistration(127.0.0.1:46077, datanodeUuid=428e0772-ec96-45d3-a469-5e068b10505d, infoPort=34629, infoSecurePort=0, ipcPort=43367, storageInfo=lv=-57;cid=testClusterID;nsid=17516913;c=1685541418177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:56:58,530 INFO [Listener at localhost.localdomain/43367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33201
2023-05-31 13:56:58,537 WARN [Listener at localhost.localdomain/43373] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 13:56:58,589 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1611ace668861c1c: Processing first storage report for DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2 from datanode c6cebfaf-ffe2-4c03-93d2-607def6d11b5
2023-05-31 13:56:58,589 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1611ace668861c1c: from storage DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2 node DatanodeRegistration(127.0.0.1:42485, datanodeUuid=c6cebfaf-ffe2-4c03-93d2-607def6d11b5, infoPort=36637, infoSecurePort=0, ipcPort=43373, storageInfo=lv=-57;cid=testClusterID;nsid=17516913;c=1685541418177), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:56:58,589 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1611ace668861c1c: Processing first storage report for DS-04d1da7f-349d-4d38-8d6f-ea330b8fa702 from datanode c6cebfaf-ffe2-4c03-93d2-607def6d11b5
2023-05-31 13:56:58,589 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1611ace668861c1c: from storage DS-04d1da7f-349d-4d38-8d6f-ea330b8fa702 node DatanodeRegistration(127.0.0.1:42485, datanodeUuid=c6cebfaf-ffe2-4c03-93d2-607def6d11b5, infoPort=36637, infoSecurePort=0, ipcPort=43373, storageInfo=lv=-57;cid=testClusterID;nsid=17516913;c=1685541418177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 13:56:58,646 DEBUG [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62
2023-05-31 13:56:58,650 INFO [Listener at localhost.localdomain/43373] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/zookeeper_0, clientPort=61551, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 13:56:58,652 INFO [Listener at localhost.localdomain/43373] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61551
2023-05-31 13:56:58,652 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,653 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,670 INFO [Listener at localhost.localdomain/43373] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5 with version=8
2023-05-31 13:56:58,670 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging
2023-05-31 13:56:58,672 INFO [Listener at localhost.localdomain/43373] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-05-31 13:56:58,673 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,673 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,673 INFO [Listener at localhost.localdomain/43373] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 13:56:58,673 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,673 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 13:56:58,674 INFO [Listener at localhost.localdomain/43373] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 13:56:58,675 INFO [Listener at localhost.localdomain/43373] ipc.NettyRpcServer(120): Bind to /136.243.18.41:39261
2023-05-31 13:56:58,676 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,677 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,678 INFO [Listener at localhost.localdomain/43373] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39261 connecting to ZooKeeper ensemble=127.0.0.1:61551
2023-05-31 13:56:58,683 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:392610x0, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 13:56:58,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39261-0x100818671b60000 connected
2023-05-31 13:56:58,700 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 13:56:58,701 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 13:56:58,701 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 13:56:58,703 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39261
2023-05-31 13:56:58,704 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39261
2023-05-31 13:56:58,706 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39261
2023-05-31 13:56:58,706 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39261
2023-05-31 13:56:58,706 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39261
2023-05-31 13:56:58,707 INFO [Listener at localhost.localdomain/43373] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5, hbase.cluster.distributed=false
2023-05-31 13:56:58,717 INFO [Listener at localhost.localdomain/43373] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 13:56:58,718 INFO [Listener at localhost.localdomain/43373] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 13:56:58,721 INFO [Listener at localhost.localdomain/43373] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38551
2023-05-31 13:56:58,721 INFO [Listener at localhost.localdomain/43373] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-31 13:56:58,726 DEBUG [Listener at localhost.localdomain/43373] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-31 13:56:58,727 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,728 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,729 INFO [Listener at localhost.localdomain/43373] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38551 connecting to ZooKeeper ensemble=127.0.0.1:61551
2023-05-31 13:56:58,731 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:385510x0, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 13:56:58,732 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): regionserver:385510x0, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 13:56:58,733 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38551-0x100818671b60001 connected
2023-05-31 13:56:58,733 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 13:56:58,734 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ZKUtil(164): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 13:56:58,735 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38551
2023-05-31 13:56:58,735 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38551
2023-05-31 13:56:58,735 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38551
2023-05-31 13:56:58,735 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38551
2023-05-31 13:56:58,735 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38551
2023-05-31 13:56:58,736 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,39261,1685541418672
2023-05-31 13:56:58,738 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 13:56:58,738 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,39261,1685541418672
2023-05-31 13:56:58,739 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 13:56:58,739 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 13:56:58,739 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:56:58,739 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 13:56:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 13:56:58,740 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,39261,1685541418672 from backup master directory
2023-05-31 13:56:58,741 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,39261,1685541418672
2023-05-31 13:56:58,741 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 13:56:58,741 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 13:56:58,741 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,39261,1685541418672
2023-05-31 13:56:58,755 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/hbase.id with ID: c6d5f349-cfaa-4717-a81c-630a2e132580
2023-05-31 13:56:58,765 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 13:56:58,766 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:56:58,778 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x54d43635 to 127.0.0.1:61551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 13:56:58,781 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a54c628, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 13:56:58,782 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 13:56:58,782 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-05-31 13:56:58,783 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 13:56:58,784 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store-tmp
2023-05-31 13:56:58,793 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 13:56:58,793 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 13:56:58,793 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:56:58,793 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:56:58,793 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 13:56:58,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:56:58,794 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:56:58,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 13:56:58,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/WALs/jenkins-hbase17.apache.org,39261,1685541418672
2023-05-31 13:56:58,797 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39261%2C1685541418672, suffix=, logDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/WALs/jenkins-hbase17.apache.org,39261,1685541418672, archiveDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/oldWALs, maxLogs=10
2023-05-31 13:56:58,804 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/WALs/jenkins-hbase17.apache.org,39261,1685541418672/jenkins-hbase17.apache.org%2C39261%2C1685541418672.1685541418797
2023-05-31 13:56:58,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46077,DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a,DISK], DatanodeInfoWithStorage[127.0.0.1:42485,DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2,DISK]]
2023-05-31 13:56:58,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-31 13:56:58,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.;
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:58,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,807 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,808 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 13:56:58,809 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 13:56:58,809 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:58,810 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,810 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,813 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:56:58,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:56:58,818 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=733711, jitterRate=-0.06703884899616241}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:56:58,818 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:56:58,820 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 13:56:58,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 13:56:58,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 13:56:58,821 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 13:56:58,822 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 13:56:58,822 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 13:56:58,822 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 13:56:58,824 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 13:56:58,825 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 13:56:58,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 13:56:58,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 13:56:58,836 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 13:56:58,836 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 13:56:58,837 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 13:56:58,838 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:58,839 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 13:56:58,839 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 13:56:58,840 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 13:56:58,840 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:56:58,840 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:56:58,840 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:58,841 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,39261,1685541418672, sessionid=0x100818671b60000, setting cluster-up flag (Was=false) 2023-05-31 13:56:58,843 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:58,846 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 13:56:58,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:56:58,849 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:58,851 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 13:56:58,852 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:56:58,853 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.hbase-snapshot/.tmp 2023-05-31 13:56:58,855 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:56:58,856 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:56:58,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,858 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541448858 2023-05-31 13:56:58,858 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 13:56:58,858 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 13:56:58,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 13:56:58,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 13:56:58,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 13:56:58,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 13:56:58,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 13:56:58,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 13:56:58,860 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:56:58,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 13:56:58,860 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 13:56:58,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 13:56:58,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 13:56:58,861 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541418861,5,FailOnTimeoutGroup] 2023-05-31 13:56:58,861 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541418861,5,FailOnTimeoutGroup] 2023-05-31 13:56:58,861 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,861 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 13:56:58,861 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,861 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 13:56:58,862 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:56:58,874 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:56:58,874 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:56:58,874 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5 2023-05-31 13:56:58,883 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:58,884 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:56:58,885 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info 2023-05-31 13:56:58,885 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:56:58,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:58,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:56:58,887 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:56:58,888 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 
13:56:58,889 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:58,889 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:56:58,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/table 2023-05-31 13:56:58,890 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:56:58,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:58,892 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740 2023-05-31 13:56:58,892 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740 2023-05-31 13:56:58,894 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 13:56:58,895 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:56:58,896 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:56:58,897 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=790264, jitterRate=0.004872724413871765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:56:58,897 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-05-31 13:56:58,897 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:56:58,897 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:56:58,898 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:56:58,898 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 13:56:58,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:56:58,900 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:56:58,901 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:56:58,938 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(951): ClusterId : c6d5f349-cfaa-4717-a81c-630a2e132580 2023-05-31 13:56:58,939 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:56:58,942 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:56:58,942 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:56:58,944 DEBUG [RS:0;jenkins-hbase17:38551] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:56:58,945 DEBUG [RS:0;jenkins-hbase17:38551] zookeeper.ReadOnlyZKClient(139): Connect 0x404f6623 to 127.0.0.1:61551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:56:58,950 DEBUG [RS:0;jenkins-hbase17:38551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e88805a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:56:58,950 DEBUG [RS:0;jenkins-hbase17:38551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ab70d53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:56:58,962 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:38551 2023-05-31 13:56:58,962 INFO [RS:0;jenkins-hbase17:38551] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:56:58,962 INFO [RS:0;jenkins-hbase17:38551] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:56:58,962 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 13:56:58,962 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,39261,1685541418672 with isa=jenkins-hbase17.apache.org/136.243.18.41:38551, startcode=1685541418717 2023-05-31 13:56:58,962 DEBUG [RS:0;jenkins-hbase17:38551] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:56:58,965 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36955, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:56:58,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,966 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5 2023-05-31 13:56:58,966 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40225 2023-05-31 13:56:58,966 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:56:58,968 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:56:58,968 DEBUG [RS:0;jenkins-hbase17:38551] zookeeper.ZKUtil(162): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,968 WARN [RS:0;jenkins-hbase17:38551] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:56:58,968 INFO [RS:0;jenkins-hbase17:38551] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:56:58,968 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,968 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38551,1685541418717] 2023-05-31 13:56:58,972 DEBUG [RS:0;jenkins-hbase17:38551] zookeeper.ZKUtil(162): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,973 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:56:58,973 INFO [RS:0;jenkins-hbase17:38551] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:56:58,974 INFO [RS:0;jenkins-hbase17:38551] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:56:58,974 INFO [RS:0;jenkins-hbase17:38551] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:56:58,974 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:56:58,975 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:56:58,976 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:56:58,976 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,977 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,977 DEBUG [RS:0;jenkins-hbase17:38551] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,977 DEBUG [RS:0;jenkins-hbase17:38551] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:56:58,978 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,978 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,978 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:58,988 INFO [RS:0;jenkins-hbase17:38551] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:56:58,988 INFO [RS:0;jenkins-hbase17:38551] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38551,1685541418717-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:56:58,998 INFO [RS:0;jenkins-hbase17:38551] regionserver.Replication(203): jenkins-hbase17.apache.org,38551,1685541418717 started 2023-05-31 13:56:58,998 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38551,1685541418717, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38551, sessionid=0x100818671b60001 2023-05-31 13:56:58,998 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:56:58,998 DEBUG [RS:0;jenkins-hbase17:38551] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,998 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38551,1685541418717' 2023-05-31 13:56:58,998 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:56:58,998 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38551,1685541418717' 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:56:58,999 DEBUG [RS:0;jenkins-hbase17:38551] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:56:59,000 INFO [RS:0;jenkins-hbase17:38551] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:56:59,000 INFO [RS:0;jenkins-hbase17:38551] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 13:56:59,051 DEBUG [jenkins-hbase17:39261] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:56:59,052 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38551,1685541418717, state=OPENING 2023-05-31 13:56:59,054 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:56:59,055 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:59,056 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:56:59,056 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:56:59,101 INFO [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38551%2C1685541418717, suffix=, 
logDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717, archiveDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs, maxLogs=32 2023-05-31 13:56:59,109 INFO [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541419102 2023-05-31 13:56:59,109 DEBUG [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42485,DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2,DISK], DatanodeInfoWithStorage[127.0.0.1:46077,DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a,DISK]] 2023-05-31 13:56:59,213 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:59,213 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:56:59,218 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:56:59,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:56:59,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:56:59,227 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38551%2C1685541418717.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717, archiveDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs, maxLogs=32 2023-05-31 13:56:59,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.meta.1685541419227.meta 2023-05-31 13:56:59,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46077,DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a,DISK], DatanodeInfoWithStorage[127.0.0.1:42485,DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2,DISK]] 2023-05-31 13:56:59,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:56:59,236 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:56:59,236 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:56:59,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:56:59,239 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info 2023-05-31 13:56:59,239 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info 2023-05-31 13:56:59,239 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:56:59,239 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:59,239 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:56:59,240 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:56:59,240 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:56:59,241 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:56:59,241 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:59,241 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:56:59,242 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/table 2023-05-31 13:56:59,242 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/table 2023-05-31 13:56:59,242 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:56:59,243 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:56:59,243 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740 2023-05-31 13:56:59,244 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740 2023-05-31 13:56:59,246 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:56:59,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:56:59,248 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=707505, jitterRate=-0.1003614068031311}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:56:59,248 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:56:59,250 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541419213 2023-05-31 13:56:59,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:56:59,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:56:59,255 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38551,1685541418717, state=OPEN 2023-05-31 13:56:59,256 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:56:59,256 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:56:59,259 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:56:59,259 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38551,1685541418717 in 200 msec 2023-05-31 13:56:59,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:56:59,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 361 msec 2023-05-31 13:56:59,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 410 msec 2023-05-31 13:56:59,265 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541419265, completionTime=-1 2023-05-31 13:56:59,265 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:56:59,265 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:56:59,268 DEBUG [hconnection-0x7d6ed3eb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:56:59,270 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:56:59,271 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:56:59,272 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541479272 2023-05-31 13:56:59,272 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541539272 2023-05-31 13:56:59,272 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39261,1685541418672-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39261,1685541418672-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39261,1685541418672-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:39261, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 13:56:59,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:56:59,278 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 13:56:59,279 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 13:56:59,281 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:56:59,282 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:56:59,284 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/hbase/namespace/1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,284 DEBUG [HFileArchiver-9] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/hbase/namespace/1475049869caf7e067eb43bc572e3848 empty. 2023-05-31 13:56:59,285 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/hbase/namespace/1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,285 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 13:56:59,295 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 13:56:59,296 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1475049869caf7e067eb43bc572e3848, NAME => 'hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp 2023-05-31 13:56:59,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:59,302 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1475049869caf7e067eb43bc572e3848, disabling compactions & flushes 2023-05-31 13:56:59,302 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,303 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,303 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. after waiting 0 ms 2023-05-31 13:56:59,303 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,303 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,303 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1475049869caf7e067eb43bc572e3848: 2023-05-31 13:56:59,305 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:56:59,306 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541419306"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541419306"}]},"ts":"1685541419306"} 2023-05-31 13:56:59,308 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:56:59,309 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:56:59,309 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541419309"}]},"ts":"1685541419309"} 2023-05-31 13:56:59,311 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 13:56:59,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1475049869caf7e067eb43bc572e3848, ASSIGN}] 2023-05-31 13:56:59,318 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1475049869caf7e067eb43bc572e3848, ASSIGN 2023-05-31 13:56:59,319 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1475049869caf7e067eb43bc572e3848, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38551,1685541418717; forceNewPlan=false, retain=false 2023-05-31 13:56:59,470 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1475049869caf7e067eb43bc572e3848, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:59,470 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541419470"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541419470"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541419470"}]},"ts":"1685541419470"} 2023-05-31 13:56:59,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1475049869caf7e067eb43bc572e3848, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:56:59,630 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1475049869caf7e067eb43bc572e3848, NAME => 'hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:56:59,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:59,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,632 INFO 
[StoreOpener-1475049869caf7e067eb43bc572e3848-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,633 DEBUG [StoreOpener-1475049869caf7e067eb43bc572e3848-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/info 2023-05-31 13:56:59,633 DEBUG [StoreOpener-1475049869caf7e067eb43bc572e3848-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/info 2023-05-31 13:56:59,634 INFO [StoreOpener-1475049869caf7e067eb43bc572e3848-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1475049869caf7e067eb43bc572e3848 columnFamilyName info 2023-05-31 13:56:59,634 INFO [StoreOpener-1475049869caf7e067eb43bc572e3848-1] regionserver.HStore(310): Store=1475049869caf7e067eb43bc572e3848/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 13:56:59,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,639 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1475049869caf7e067eb43bc572e3848 2023-05-31 13:56:59,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:56:59,643 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1475049869caf7e067eb43bc572e3848; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=689110, jitterRate=-0.1237519383430481}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:56:59,643 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1475049869caf7e067eb43bc572e3848: 2023-05-31 13:56:59,648 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848., pid=6, masterSystemTime=1685541419624 2023-05-31 13:56:59,650 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,650 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:56:59,651 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1475049869caf7e067eb43bc572e3848, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:59,651 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541419651"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541419651"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541419651"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541419651"}]},"ts":"1685541419651"} 2023-05-31 13:56:59,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 13:56:59,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1475049869caf7e067eb43bc572e3848, server=jenkins-hbase17.apache.org,38551,1685541418717 in 182 msec 2023-05-31 13:56:59,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 13:56:59,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1475049869caf7e067eb43bc572e3848, ASSIGN in 342 msec 2023-05-31 13:56:59,660 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:56:59,661 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541419660"}]},"ts":"1685541419660"} 2023-05-31 13:56:59,662 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 13:56:59,664 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:56:59,666 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-05-31 13:56:59,680 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 13:56:59,681 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:56:59,681 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:59,684 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 13:56:59,693 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, 
quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:56:59,696 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-31 13:56:59,706 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 13:56:59,713 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:56:59,717 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-31 13:56:59,731 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 13:56:59,732 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 13:56:59,732 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.991sec 2023-05-31 13:56:59,733 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 13:56:59,733 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 13:56:59,733 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 13:56:59,733 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39261,1685541418672-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 13:56:59,733 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39261,1685541418672-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 13:56:59,735 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 13:56:59,739 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ReadOnlyZKClient(139): Connect 0x78d0f774 to 127.0.0.1:61551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:56:59,745 DEBUG [Listener at localhost.localdomain/43373] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ddf9d80, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:56:59,747 DEBUG [hconnection-0x6bff79ce-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:56:59,749 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:56:59,751 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:56:59,752 INFO [Listener at localhost.localdomain/43373] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:56:59,755 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 13:56:59,755 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:56:59,756 INFO [Listener at localhost.localdomain/43373] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 13:56:59,759 DEBUG [Listener at localhost.localdomain/43373] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 13:56:59,763 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 13:56:59,765 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 13:56:59,765 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 13:56:59,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:56:59,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-31 13:56:59,776 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:56:59,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-31 13:56:59,778 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:56:59,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 13:56:59,780 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:56:59,781 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d empty. 2023-05-31 13:56:59,782 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:56:59,782 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-31 13:56:59,794 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 13:56:59,795 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c341035db8c45e6c5c51442cddc53e7d, NAME => 'TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/.tmp 2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] 
regionserver.HRegion(1604): Closing c341035db8c45e6c5c51442cddc53e7d, disabling compactions & flushes 2023-05-31 13:56:59,802 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. after waiting 0 ms 2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:56:59,802 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
2023-05-31 13:56:59,802 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:56:59,805 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:56:59,806 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541419805"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541419805"}]},"ts":"1685541419805"} 2023-05-31 13:56:59,807 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-31 13:56:59,808 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:56:59,809 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541419808"}]},"ts":"1685541419808"} 2023-05-31 13:56:59,810 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-31 13:56:59,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, ASSIGN}] 2023-05-31 13:56:59,815 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, ASSIGN 2023-05-31 13:56:59,815 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38551,1685541418717; forceNewPlan=false, retain=false 2023-05-31 13:56:59,967 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c341035db8c45e6c5c51442cddc53e7d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:56:59,968 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541419967"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541419967"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541419967"}]},"ts":"1685541419967"} 2023-05-31 13:56:59,971 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:57:00,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
2023-05-31 13:57:00,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c341035db8c45e6c5c51442cddc53e7d, NAME => 'TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:57:00,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:57:00,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,138 INFO [StoreOpener-c341035db8c45e6c5c51442cddc53e7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,141 DEBUG [StoreOpener-c341035db8c45e6c5c51442cddc53e7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info 2023-05-31 13:57:00,141 DEBUG [StoreOpener-c341035db8c45e6c5c51442cddc53e7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info 2023-05-31 13:57:00,142 INFO [StoreOpener-c341035db8c45e6c5c51442cddc53e7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c341035db8c45e6c5c51442cddc53e7d columnFamilyName info 2023-05-31 13:57:00,143 INFO [StoreOpener-c341035db8c45e6c5c51442cddc53e7d-1] regionserver.HStore(310): Store=c341035db8c45e6c5c51442cddc53e7d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:57:00,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1055): writing seq id for c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:00,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:57:00,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened c341035db8c45e6c5c51442cddc53e7d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=733543, jitterRate=-0.06725268065929413}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:57:00,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:00,155 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d., pid=11, masterSystemTime=1685541420125 2023-05-31 13:57:00,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:00,157 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
2023-05-31 13:57:00,158 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c341035db8c45e6c5c51442cddc53e7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:00,158 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541420158"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541420158"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541420158"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541420158"}]},"ts":"1685541420158"} 2023-05-31 13:57:00,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 13:57:00,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717 in 190 msec 2023-05-31 13:57:00,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 13:57:00,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, ASSIGN in 351 msec 2023-05-31 13:57:00,168 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:57:00,168 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541420168"}]},"ts":"1685541420168"} 2023-05-31 13:57:00,170 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-31 13:57:00,172 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:57:00,174 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 406 msec 2023-05-31 13:57:02,945 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 13:57:04,973 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 13:57:04,975 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 13:57:04,977 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-31 13:57:09,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39261] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 13:57:09,780 INFO [Listener at localhost.localdomain/43373] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-05-31 13:57:09,783 DEBUG [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-31 13:57:09,783 DEBUG [Listener at localhost.localdomain/43373] 
hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:09,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:09,801 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:57:09,816 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/ac9836502caf491498f9cd66e4e5f28f 2023-05-31 13:57:09,825 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/ac9836502caf491498f9cd66e4e5f28f as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f 2023-05-31 13:57:09,831 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 13:57:09,831 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for c341035db8c45e6c5c51442cddc53e7d in 30ms, sequenceid=11, compaction requested=false 2023-05-31 13:57:09,832 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:09,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:09,833 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-31 13:57:09,842 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=33 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/7e16d81fa0b04ebfaf4cf351164a104a 2023-05-31 13:57:09,850 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/7e16d81fa0b04ebfaf4cf351164a104a as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a 2023-05-31 13:57:09,856 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a, entries=19, sequenceid=33, filesize=24.7 K 2023-05-31 13:57:09,857 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for c341035db8c45e6c5c51442cddc53e7d in 24ms, sequenceid=33, compaction requested=false 2023-05-31 13:57:09,857 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:09,857 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.9 K, sizeToCheck=16.0 K 2023-05-31 13:57:09,857 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:09,857 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a because midkey is the same as first or last row 2023-05-31 13:57:11,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:11,844 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:57:11,860 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/153c31b7329b4baca6ada409a5102486 2023-05-31 13:57:11,867 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/153c31b7329b4baca6ada409a5102486 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486 2023-05-31 13:57:11,874 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486, entries=7, sequenceid=43, filesize=12.1 K 2023-05-31 13:57:11,875 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for c341035db8c45e6c5c51442cddc53e7d in 31ms, sequenceid=43, compaction requested=true 2023-05-31 13:57:11,875 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:11,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:11,875 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-05-31 13:57:11,875 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:11,875 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a because midkey is the same as first or last row 2023-05-31 13:57:11,875 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:11,876 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:57:11,876 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): 
Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-31 13:57:11,877 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:57:11,878 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): c341035db8c45e6c5c51442cddc53e7d/info is initiating minor compaction (all files) 2023-05-31 13:57:11,878 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c341035db8c45e6c5c51442cddc53e7d/info in TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:11,878 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp, totalSize=49.0 K 2023-05-31 13:57:11,879 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting ac9836502caf491498f9cd66e4e5f28f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, 
earliestPutTs=1685541429789 2023-05-31 13:57:11,881 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 7e16d81fa0b04ebfaf4cf351164a104a, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=33, earliestPutTs=1685541429802 2023-05-31 13:57:11,882 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 153c31b7329b4baca6ada409a5102486, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685541429833 2023-05-31 13:57:11,896 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=64 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/a1cb87e460074ee2bbc6dae4dc21c0ef 2023-05-31 13:57:11,900 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): c341035db8c45e6c5c51442cddc53e7d#info#compaction#29 average throughput is 16.93 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:11,902 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 13:57:11,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] ipc.CallRunner(144): callId: 71 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35312 deadline: 1685541441902, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:11,904 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/a1cb87e460074ee2bbc6dae4dc21c0ef as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef 2023-05-31 13:57:11,919 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef, entries=18, sequenceid=64, filesize=23.7 K 2023-05-31 13:57:11,920 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for c341035db8c45e6c5c51442cddc53e7d in 44ms, sequenceid=64, compaction requested=false 2023-05-31 13:57:11,920 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:11,920 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.7 K, sizeToCheck=16.0 K 2023-05-31 13:57:11,920 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:11,921 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a because midkey is the same as first or last row 2023-05-31 13:57:11,922 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/71f0a20a207740edaf3c375c87739602 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602 2023-05-31 13:57:11,928 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 
c341035db8c45e6c5c51442cddc53e7d/info of c341035db8c45e6c5c51442cddc53e7d into 71f0a20a207740edaf3c375c87739602(size=39.6 K), total size for store is 63.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 13:57:11,928 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:11,928 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d., storeName=c341035db8c45e6c5c51442cddc53e7d/info, priority=13, startTime=1685541431875; duration=0sec 2023-05-31 13:57:11,929 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=63.3 K, sizeToCheck=16.0 K 2023-05-31 13:57:11,929 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:11,929 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602 because midkey is the same as first or last row 2023-05-31 13:57:11,929 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:21,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:21,981 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 
2023-05-31 13:57:21,998 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=80 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/af5c799b1e1c471584ab47288871a3b9 2023-05-31 13:57:22,005 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/af5c799b1e1c471584ab47288871a3b9 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9 2023-05-31 13:57:22,014 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9, entries=12, sequenceid=80, filesize=17.4 K 2023-05-31 13:57:22,015 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for c341035db8c45e6c5c51442cddc53e7d in 34ms, sequenceid=80, compaction requested=true 2023-05-31 13:57:22,015 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:22,015 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, sizeToCheck=16.0 K 2023-05-31 13:57:22,015 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:22,015 DEBUG [MemStoreFlusher.0] 
regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602 because midkey is the same as first or last row 2023-05-31 13:57:22,015 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,015 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:57:22,017 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82610 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:57:22,017 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): c341035db8c45e6c5c51442cddc53e7d/info is initiating minor compaction (all files) 2023-05-31 13:57:22,017 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c341035db8c45e6c5c51442cddc53e7d/info in TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
2023-05-31 13:57:22,017 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp, totalSize=80.7 K 2023-05-31 13:57:22,017 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 71f0a20a207740edaf3c375c87739602, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685541429789 2023-05-31 13:57:22,018 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting a1cb87e460074ee2bbc6dae4dc21c0ef, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=64, earliestPutTs=1685541431845 2023-05-31 13:57:22,018 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting af5c799b1e1c471584ab47288871a3b9, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685541431878 2023-05-31 13:57:22,029 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): c341035db8c45e6c5c51442cddc53e7d#info#compaction#31 average throughput is 32.32 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:22,046 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/6631af04418a45789a1c559506de7b77 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77 2023-05-31 13:57:22,053 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c341035db8c45e6c5c51442cddc53e7d/info of c341035db8c45e6c5c51442cddc53e7d into 6631af04418a45789a1c559506de7b77(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:57:22,053 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:22,053 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d., storeName=c341035db8c45e6c5c51442cddc53e7d/info, priority=13, startTime=1685541442015; duration=0sec 2023-05-31 13:57:22,054 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K 2023-05-31 13:57:22,054 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 13:57:22,054 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,054 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,055 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] assignment.AssignmentManager(1140): Split request from jenkins-hbase17.apache.org,38551,1685541418717, parent={ENCODED => c341035db8c45e6c5c51442cddc53e7d, NAME => 'TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-31 13:57:22,061 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,066 DEBUG 
[RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39261] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c341035db8c45e6c5c51442cddc53e7d, daughterA=6a39871884e9043af0b948703cfa5d61, daughterB=65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,067 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c341035db8c45e6c5c51442cddc53e7d, daughterA=6a39871884e9043af0b948703cfa5d61, daughterB=65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,067 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c341035db8c45e6c5c51442cddc53e7d, daughterA=6a39871884e9043af0b948703cfa5d61, daughterB=65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,067 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c341035db8c45e6c5c51442cddc53e7d, daughterA=6a39871884e9043af0b948703cfa5d61, daughterB=65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, UNASSIGN}] 2023-05-31 13:57:22,078 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c341035db8c45e6c5c51442cddc53e7d, UNASSIGN 2023-05-31 13:57:22,079 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c341035db8c45e6c5c51442cddc53e7d, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,079 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541442079"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541442079"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541442079"}]},"ts":"1685541442079"} 2023-05-31 13:57:22,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:57:22,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing c341035db8c45e6c5c51442cddc53e7d, disabling compactions & flushes 2023-05-31 13:57:22,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:22,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:22,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
after waiting 0 ms 2023-05-31 13:57:22,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:22,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing c341035db8c45e6c5c51442cddc53e7d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:57:22,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/3e500bcc0d9e42589b3135ed4e1bac84 2023-05-31 13:57:22,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.tmp/info/3e500bcc0d9e42589b3135ed4e1bac84 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/3e500bcc0d9e42589b3135ed4e1bac84 2023-05-31 13:57:22,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/3e500bcc0d9e42589b3135ed4e1bac84, entries=1, sequenceid=85, filesize=5.8 K 2023-05-31 13:57:22,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for c341035db8c45e6c5c51442cddc53e7d in 
26ms, sequenceid=85, compaction requested=false 2023-05-31 13:57:22,285 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9] to archive 2023-05-31 13:57:22,286 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 13:57:22,288 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/ac9836502caf491498f9cd66e4e5f28f 2023-05-31 13:57:22,290 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/7e16d81fa0b04ebfaf4cf351164a104a 2023-05-31 13:57:22,293 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/71f0a20a207740edaf3c375c87739602 2023-05-31 13:57:22,297 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/153c31b7329b4baca6ada409a5102486 2023-05-31 13:57:22,298 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/a1cb87e460074ee2bbc6dae4dc21c0ef 2023-05-31 13:57:22,299 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/af5c799b1e1c471584ab47288871a3b9 2023-05-31 13:57:22,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1 2023-05-31 13:57:22,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 2023-05-31 13:57:22,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for c341035db8c45e6c5c51442cddc53e7d: 2023-05-31 13:57:22,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,321 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c341035db8c45e6c5c51442cddc53e7d, regionState=CLOSED 2023-05-31 13:57:22,321 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541442321"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541442321"}]},"ts":"1685541442321"} 2023-05-31 13:57:22,325 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-31 13:57:22,325 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure c341035db8c45e6c5c51442cddc53e7d, server=jenkins-hbase17.apache.org,38551,1685541418717 in 242 msec 2023-05-31 13:57:22,329 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-31 13:57:22,329 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, 
region=c341035db8c45e6c5c51442cddc53e7d, UNASSIGN in 249 msec 2023-05-31 13:57:22,347 INFO [PEWorker-1] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=c341035db8c45e6c5c51442cddc53e7d, threads=2 2023-05-31 13:57:22,348 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/3e500bcc0d9e42589b3135ed4e1bac84 for region: c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,348 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77 for region: c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,359 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/3e500bcc0d9e42589b3135ed4e1bac84, top=true 2023-05-31 13:57:22,363 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/.splits/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84 for child: 65907f62a09743fab59d807d3dccece2, parent: c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,363 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete 
for store file: hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/3e500bcc0d9e42589b3135ed4e1bac84 for region: c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,376 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77 for region: c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:57:22,376 DEBUG [PEWorker-1] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region c341035db8c45e6c5c51442cddc53e7d Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-05-31 13:57:22,410 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-31 13:57:22,411 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-31 13:57:22,414 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685541442413"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685541442413"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685541442413"}]},"ts":"1685541442413"} 2023-05-31 13:57:22,414 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442413"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541442413"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541442413"}]},"ts":"1685541442413"} 2023-05-31 13:57:22,414 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442413"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541442413"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541442413"}]},"ts":"1685541442413"} 2023-05-31 13:57:22,464 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38551] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-31 13:57:22,464 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-31 13:57:22,464 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-31 13:57:22,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6a39871884e9043af0b948703cfa5d61, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=65907f62a09743fab59d807d3dccece2, ASSIGN}] 2023-05-31 13:57:22,480 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=65907f62a09743fab59d807d3dccece2, ASSIGN 2023-05-31 13:57:22,480 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6a39871884e9043af0b948703cfa5d61, ASSIGN 2023-05-31 13:57:22,481 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6a39871884e9043af0b948703cfa5d61, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase17.apache.org,38551,1685541418717; forceNewPlan=false, retain=false 2023-05-31 13:57:22,481 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=65907f62a09743fab59d807d3dccece2, ASSIGN; state=SPLITTING_NEW, 
location=jenkins-hbase17.apache.org,38551,1685541418717; forceNewPlan=false, retain=false 2023-05-31 13:57:22,485 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/info/c32a499f44c8415f8b5dcc08d0630872 2023-05-31 13:57:22,509 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/table/0b9e642bbde64f30b12393af559e94d0 2023-05-31 13:57:22,516 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/info/c32a499f44c8415f8b5dcc08d0630872 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info/c32a499f44c8415f8b5dcc08d0630872 2023-05-31 13:57:22,522 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info/c32a499f44c8415f8b5dcc08d0630872, entries=29, sequenceid=17, filesize=8.6 K 2023-05-31 13:57:22,523 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/table/0b9e642bbde64f30b12393af559e94d0 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/table/0b9e642bbde64f30b12393af559e94d0 2023-05-31 13:57:22,529 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/table/0b9e642bbde64f30b12393af559e94d0, entries=4, sequenceid=17, filesize=4.8 K 2023-05-31 13:57:22,530 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 66ms, sequenceid=17, compaction requested=false 2023-05-31 13:57:22,531 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 13:57:22,634 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=6a39871884e9043af0b948703cfa5d61, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,634 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=65907f62a09743fab59d807d3dccece2, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,634 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442634"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541442634"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541442634"}]},"ts":"1685541442634"} 2023-05-31 13:57:22,634 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442634"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541442634"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541442634"}]},"ts":"1685541442634"} 2023-05-31 13:57:22,637 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, 
state=RUNNABLE; OpenRegionProcedure 6a39871884e9043af0b948703cfa5d61, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:57:22,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717}] 2023-05-31 13:57:22,793 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:57:22,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a39871884e9043af0b948703cfa5d61, NAME => 'TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-31 13:57:22,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:57:22,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,795 INFO [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,796 DEBUG [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info 2023-05-31 13:57:22,796 DEBUG [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info 2023-05-31 13:57:22,796 INFO [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a39871884e9043af0b948703cfa5d61 columnFamilyName info 2023-05-31 13:57:22,816 DEBUG [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-bottom 2023-05-31 13:57:22,817 INFO [StoreOpener-6a39871884e9043af0b948703cfa5d61-1] regionserver.HStore(310): Store=6a39871884e9043af0b948703cfa5d61/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:57:22,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:57:22,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6a39871884e9043af0b948703cfa5d61; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=810066, jitterRate=0.030052974820137024}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:57:22,822 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6a39871884e9043af0b948703cfa5d61: 2023-05-31 13:57:22,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61., pid=17, masterSystemTime=1685541442790 2023-05-31 13:57:22,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,824 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-31 13:57:22,825 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:57:22,825 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 6a39871884e9043af0b948703cfa5d61/info is initiating minor compaction (all files) 2023-05-31 13:57:22,825 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6a39871884e9043af0b948703cfa5d61/info in TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 
2023-05-31 13:57:22,825 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-bottom] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/.tmp, totalSize=71.4 K 2023-05-31 13:57:22,826 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685541429789 2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:57:22,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:57:22,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 
2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65907f62a09743fab59d807d3dccece2, NAME => 'TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,826 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=6a39871884e9043af0b948703cfa5d61, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,827 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442826"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541442826"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541442826"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541442826"}]},"ts":"1685541442826"} 2023-05-31 13:57:22,827 INFO 
[StoreOpener-65907f62a09743fab59d807d3dccece2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,828 DEBUG [StoreOpener-65907f62a09743fab59d807d3dccece2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info 2023-05-31 13:57:22,828 DEBUG [StoreOpener-65907f62a09743fab59d807d3dccece2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info 2023-05-31 13:57:22,829 INFO [StoreOpener-65907f62a09743fab59d807d3dccece2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65907f62a09743fab59d807d3dccece2 columnFamilyName info 2023-05-31 13:57:22,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-31 13:57:22,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, 
state=SUCCESS; OpenRegionProcedure 6a39871884e9043af0b948703cfa5d61, server=jenkins-hbase17.apache.org,38551,1685541418717 in 191 msec 2023-05-31 13:57:22,833 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6a39871884e9043af0b948703cfa5d61#info#compaction#35 average throughput is 20.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:22,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6a39871884e9043af0b948703cfa5d61, ASSIGN in 352 msec 2023-05-31 13:57:22,851 DEBUG [StoreOpener-65907f62a09743fab59d807d3dccece2-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-top 2023-05-31 13:57:22,856 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/.tmp/info/e9c6ac86327e4463ba3a0785f1c58ad7 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/e9c6ac86327e4463ba3a0785f1c58ad7 2023-05-31 13:57:22,858 DEBUG [StoreOpener-65907f62a09743fab59d807d3dccece2-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84 2023-05-31 13:57:22,858 INFO [StoreOpener-65907f62a09743fab59d807d3dccece2-1] regionserver.HStore(310): Store=65907f62a09743fab59d807d3dccece2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:57:22,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,863 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 6a39871884e9043af0b948703cfa5d61/info of 6a39871884e9043af0b948703cfa5d61 into e9c6ac86327e4463ba3a0785f1c58ad7(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:57:22,863 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6a39871884e9043af0b948703cfa5d61: 2023-05-31 13:57:22,863 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61., storeName=6a39871884e9043af0b948703cfa5d61/info, priority=15, startTime=1685541442823; duration=0sec 2023-05-31 13:57:22,863 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:22,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 65907f62a09743fab59d807d3dccece2; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=862586, jitterRate=0.09683537483215332}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:57:22,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:22,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., pid=18, masterSystemTime=1685541442790 2023-05-31 13:57:22,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:22,867 DEBUG 
[RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-05-31 13:57:22,868 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:57:22,868 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files) 2023-05-31 13:57:22,869 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:57:22,869 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-top, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=77.2 K 2023-05-31 
13:57:22,869 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685541429789 2023-05-31 13:57:22,870 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685541441982 2023-05-31 13:57:22,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:57:22,870 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 
2023-05-31 13:57:22,871 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=65907f62a09743fab59d807d3dccece2, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:22,871 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685541442871"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541442871"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541442871"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541442871"}]},"ts":"1685541442871"} 2023-05-31 13:57:22,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-31 13:57:22,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717 in 235 msec 2023-05-31 13:57:22,877 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-31 13:57:22,877 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=65907f62a09743fab59d807d3dccece2, ASSIGN in 397 msec 2023-05-31 13:57:22,878 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#36 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:22,879 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c341035db8c45e6c5c51442cddc53e7d, daughterA=6a39871884e9043af0b948703cfa5d61, daughterB=65907f62a09743fab59d807d3dccece2 in 816 msec 2023-05-31 13:57:23,300 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/8b549f8e365e44a8a01e607162dc8158 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/8b549f8e365e44a8a01e607162dc8158 2023-05-31 13:57:23,306 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into 8b549f8e365e44a8a01e607162dc8158(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:57:23,307 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:23,307 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=14, startTime=1685541442865; duration=0sec 2023-05-31 13:57:23,307 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:23,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35312 deadline: 1685541453986, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685541419765.c341035db8c45e6c5c51442cddc53e7d. 
is not online on jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:27,876 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 13:57:34,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:34,063 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 13:57:34,073 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/da56ff19d7c34219be6a476f25a60c10 2023-05-31 13:57:34,080 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/da56ff19d7c34219be6a476f25a60c10 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10 2023-05-31 13:57:34,085 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10, entries=7, sequenceid=99, filesize=12.1 K 2023-05-31 13:57:34,086 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for 
65907f62a09743fab59d807d3dccece2 in 23ms, sequenceid=99, compaction requested=false 2023-05-31 13:57:34,086 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:34,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:34,087 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 13:57:34,097 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=120 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/a951f0761b7d4e46bea9380b64bd2c30 2023-05-31 13:57:34,102 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/a951f0761b7d4e46bea9380b64bd2c30 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30 2023-05-31 13:57:34,107 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30, entries=18, sequenceid=120, filesize=23.7 K 2023-05-31 13:57:34,108 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=7.36 KB/7532 for 
65907f62a09743fab59d807d3dccece2 in 21ms, sequenceid=120, compaction requested=true 2023-05-31 13:57:34,108 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:34,108 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:34,108 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:57:34,109 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 44914 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:57:34,109 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files) 2023-05-31 13:57:34,110 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 
2023-05-31 13:57:34,110 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/8b549f8e365e44a8a01e607162dc8158, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=43.9 K 2023-05-31 13:57:34,110 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 8b549f8e365e44a8a01e607162dc8158, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685541431901 2023-05-31 13:57:34,110 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting da56ff19d7c34219be6a476f25a60c10, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1685541454057 2023-05-31 13:57:34,111 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting a951f0761b7d4e46bea9380b64bd2c30, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685541454064 2023-05-31 13:57:34,122 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#39 average throughput is unlimited, slept 0 time(s) and total slept 
time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:34,133 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/292a0cc97a764705ace5f2c39e1f496e as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/292a0cc97a764705ace5f2c39e1f496e 2023-05-31 13:57:34,139 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into 292a0cc97a764705ace5f2c39e1f496e(size=34.5 K), total size for store is 34.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 13:57:34,140 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:34,140 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541454108; duration=0sec 2023-05-31 13:57:34,140 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:36,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:36,101 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 
1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-31 13:57:36,114 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/ee3f2e4bdf7e4679a57db4c646d7f5a6 2023-05-31 13:57:36,121 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/ee3f2e4bdf7e4679a57db4c646d7f5a6 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6 2023-05-31 13:57:36,127 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6, entries=8, sequenceid=132, filesize=13.2 K 2023-05-31 13:57:36,128 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=16.81 KB/17216 for 65907f62a09743fab59d807d3dccece2 in 27ms, sequenceid=132, compaction requested=false 2023-05-31 13:57:36,128 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:36,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:57:36,130 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, 
dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 13:57:36,151 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 13:57:36,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35312 deadline: 1685541466151, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:57:36,549 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=153 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/0168bf6a7e434ae9a12b1e162783d933 2023-05-31 13:57:36,556 DEBUG [MemStoreFlusher.0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/0168bf6a7e434ae9a12b1e162783d933 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933 2023-05-31 13:57:36,562 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933, entries=18, sequenceid=153, filesize=23.7 K 2023-05-31 13:57:36,563 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 65907f62a09743fab59d807d3dccece2 in 433ms, sequenceid=153, compaction requested=true 2023-05-31 13:57:36,563 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:57:36,563 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:57:36,563 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:57:36,564 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 73098 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:57:36,565 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 
65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files) 2023-05-31 13:57:36,565 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:57:36,565 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/292a0cc97a764705ace5f2c39e1f496e, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=71.4 K 2023-05-31 13:57:36,565 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 292a0cc97a764705ace5f2c39e1f496e, keycount=28, bloomtype=ROW, size=34.5 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685541431901 2023-05-31 13:57:36,566 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting ee3f2e4bdf7e4679a57db4c646d7f5a6, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1685541454088 2023-05-31 13:57:36,566 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 0168bf6a7e434ae9a12b1e162783d933, keycount=18, 
bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=153, earliestPutTs=1685541456103 2023-05-31 13:57:36,581 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#42 average throughput is 55.41 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:57:36,594 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/2bcfb346cc034b4e8498d5e28662aeaf as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/2bcfb346cc034b4e8498d5e28662aeaf 2023-05-31 13:57:36,600 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into 2bcfb346cc034b4e8498d5e28662aeaf(size=62.0 K), total size for store is 62.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:57:36,600 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:36,600 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541456563; duration=0sec
2023-05-31 13:57:36,600 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:57:45,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2023-05-31 13:57:45,036 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=33, reuseRatio=71.74%
2023-05-31 13:57:46,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:57:46,166 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB
2023-05-31 13:57:46,182 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=169 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/055fe3dd505848778264703d01929307
2023-05-31 13:57:46,200 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/055fe3dd505848778264703d01929307 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307
2023-05-31 13:57:46,208 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307, entries=12, sequenceid=169, filesize=17.4 K
2023-05-31 13:57:46,209 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 65907f62a09743fab59d807d3dccece2 in 44ms, sequenceid=169, compaction requested=false
2023-05-31 13:57:46,209 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:48,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:57:48,175 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-05-31 13:57:48,182 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=179 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/16383522beb24af795ec9af33eb16260
2023-05-31 13:57:48,189 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/16383522beb24af795ec9af33eb16260 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260
2023-05-31 13:57:48,195 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260, entries=7, sequenceid=179, filesize=12.1 K
2023-05-31 13:57:48,196 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=15.76 KB/16140 for 65907f62a09743fab59d807d3dccece2 in 21ms, sequenceid=179, compaction requested=true
2023-05-31 13:57:48,196 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:48,196 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:57:48,196 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 13:57:48,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:57:48,197 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB
2023-05-31 13:57:48,198 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93734 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 13:57:48,198 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files)
2023-05-31 13:57:48,198 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.
2023-05-31 13:57:48,198 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/2bcfb346cc034b4e8498d5e28662aeaf, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=91.5 K
2023-05-31 13:57:48,198 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 2bcfb346cc034b4e8498d5e28662aeaf, keycount=54, bloomtype=ROW, size=62.0 K, encoding=NONE, compression=NONE, seqNum=153, earliestPutTs=1685541431901
2023-05-31 13:57:48,199 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 055fe3dd505848778264703d01929307, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1685541456130
2023-05-31 13:57:48,199 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 16383522beb24af795ec9af33eb16260, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1685541466167
2023-05-31 13:57:48,220 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=198 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/c9a0d8d2631f474f9b8a3a017c2d9790
2023-05-31 13:57:48,224 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#46 average throughput is 37.45 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 13:57:48,227 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/c9a0d8d2631f474f9b8a3a017c2d9790 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790
2023-05-31 13:57:48,239 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790, entries=16, sequenceid=198, filesize=21.6 K
2023-05-31 13:57:48,240 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=10.51 KB/10760 for 65907f62a09743fab59d807d3dccece2 in 43ms, sequenceid=198, compaction requested=false
2023-05-31 13:57:48,240 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:48,242 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/07e530dce81d41b1ab1cce72426006a8 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/07e530dce81d41b1ab1cce72426006a8
2023-05-31 13:57:48,248 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into 07e530dce81d41b1ab1cce72426006a8(size=82.2 K), total size for store is 103.8 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 13:57:48,248 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:48,248 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541468196; duration=0sec
2023-05-31 13:57:48,248 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:57:50,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:57:50,213 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB
2023-05-31 13:57:50,227 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/5acdadfda78e48bca1a70afd266ff032
2023-05-31 13:57:50,234 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/5acdadfda78e48bca1a70afd266ff032 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032
2023-05-31 13:57:50,239 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032, entries=11, sequenceid=213, filesize=16.3 K
2023-05-31 13:57:50,242 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=17.86 KB/18292 for 65907f62a09743fab59d807d3dccece2 in 29ms, sequenceid=213, compaction requested=true
2023-05-31 13:57:50,242 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:50,242 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:57:50,242 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 13:57:50,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:57:50,243 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB
2023-05-31 13:57:50,244 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 123035 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 13:57:50,244 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files)
2023-05-31 13:57:50,244 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.
2023-05-31 13:57:50,244 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/07e530dce81d41b1ab1cce72426006a8, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=120.2 K
2023-05-31 13:57:50,244 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 07e530dce81d41b1ab1cce72426006a8, keycount=73, bloomtype=ROW, size=82.2 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1685541431901
2023-05-31 13:57:50,245 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting c9a0d8d2631f474f9b8a3a017c2d9790, keycount=16, bloomtype=ROW, size=21.6 K, encoding=NONE, compression=NONE, seqNum=198, earliestPutTs=1685541468175
2023-05-31 13:57:50,245 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 5acdadfda78e48bca1a70afd266ff032, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685541468197
2023-05-31 13:57:50,254 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=234 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/1084d146543b40c48cb8ac5441e0c580
2023-05-31 13:57:50,258 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#49 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 13:57:50,262 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-31 13:57:50,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35312 deadline: 1685541480261, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717
2023-05-31 13:57:50,263 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/1084d146543b40c48cb8ac5441e0c580 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580
2023-05-31 13:57:50,271 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580, entries=18, sequenceid=234, filesize=23.7 K
2023-05-31 13:57:50,272 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 65907f62a09743fab59d807d3dccece2 in 29ms, sequenceid=234, compaction requested=false
2023-05-31 13:57:50,272 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:50,273 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/06172dfe499d445d9e72058ae72ffe8e as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/06172dfe499d445d9e72058ae72ffe8e
2023-05-31 13:57:50,279 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into 06172dfe499d445d9e72058ae72ffe8e(size=110.7 K), total size for store is 134.5 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 13:57:50,279 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:57:50,279 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541470242; duration=0sec
2023-05-31 13:57:50,279 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:57:51,932 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-05-31 13:58:00,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:58:00,339 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB
2023-05-31 13:58:00,353 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=250 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/12b3a91cbcaa4e948c76ebdf7b794d78
2023-05-31 13:58:00,363 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/12b3a91cbcaa4e948c76ebdf7b794d78 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78
2023-05-31 13:58:00,369 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78, entries=12, sequenceid=250, filesize=17.4 K
2023-05-31 13:58:00,370 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 65907f62a09743fab59d807d3dccece2 in 32ms, sequenceid=250, compaction requested=true
2023-05-31 13:58:00,370 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:58:00,370 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:58:00,370 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 13:58:00,372 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155485 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 13:58:00,372 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files)
2023-05-31 13:58:00,372 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.
2023-05-31 13:58:00,372 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/06172dfe499d445d9e72058ae72ffe8e, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=151.8 K
2023-05-31 13:58:00,373 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 06172dfe499d445d9e72058ae72ffe8e, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685541431901
2023-05-31 13:58:00,373 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 1084d146543b40c48cb8ac5441e0c580, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=234, earliestPutTs=1685541470214
2023-05-31 13:58:00,373 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 12b3a91cbcaa4e948c76ebdf7b794d78, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685541470244
2023-05-31 13:58:00,386 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#51 average throughput is 44.47 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 13:58:00,398 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/b86f515e82cb43d3afc8b929019270a1 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/b86f515e82cb43d3afc8b929019270a1
2023-05-31 13:58:00,404 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into b86f515e82cb43d3afc8b929019270a1(size=142.6 K), total size for store is 142.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 13:58:00,404 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:58:00,404 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541480370; duration=0sec
2023-05-31 13:58:00,405 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:58:02,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:58:02,361 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-05-31 13:58:02,374 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=261 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/fa3a7fc0bec04b948827cb0d095d20b9
2023-05-31 13:58:02,379 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/fa3a7fc0bec04b948827cb0d095d20b9 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9
2023-05-31 13:58:02,385 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9, entries=7, sequenceid=261, filesize=12.1 K
2023-05-31 13:58:02,386 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 65907f62a09743fab59d807d3dccece2 in 25ms, sequenceid=261, compaction requested=false
2023-05-31 13:58:02,386 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:58:02,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:58:02,386 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB
2023-05-31 13:58:02,401 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=281 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/d545911351c74f6e9960d753fc851afe
2023-05-31 13:58:02,407 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/d545911351c74f6e9960d753fc851afe as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe
2023-05-31 13:58:02,411 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe, entries=17, sequenceid=281, filesize=22.7 K
2023-05-31 13:58:02,412 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for 65907f62a09743fab59d807d3dccece2 in 26ms, sequenceid=281, compaction requested=true
2023-05-31 13:58:02,412 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:58:02,412 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-05-31 13:58:02,412 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 13:58:02,413 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 181686 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 13:58:02,413 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files)
2023-05-31 13:58:02,414 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.
2023-05-31 13:58:02,414 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/b86f515e82cb43d3afc8b929019270a1, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=177.4 K
2023-05-31 13:58:02,414 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting b86f515e82cb43d3afc8b929019270a1, keycount=130, bloomtype=ROW, size=142.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685541431901
2023-05-31 13:58:02,414 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting fa3a7fc0bec04b948827cb0d095d20b9, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=261, earliestPutTs=1685541480340
2023-05-31 13:58:02,415 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting d545911351c74f6e9960d753fc851afe, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685541482362
2023-05-31 13:58:02,424 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#54 average throughput is 79.01 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 13:58:02,439 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/fe2d8e08fdfd45f0a047a26bf6151472 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fe2d8e08fdfd45f0a047a26bf6151472
2023-05-31 13:58:02,444 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into fe2d8e08fdfd45f0a047a26bf6151472(size=168.0 K), total size for store is 168.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 13:58:02,444 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2:
2023-05-31 13:58:02,444 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541482412; duration=0sec
2023-05-31 13:58:02,444 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 13:58:04,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2
2023-05-31 13:58:04,401 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB
2023-05-31 13:58:04,417 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/9acee57e3acc47cc85fd85fe3c92a415
2023-05-31 13:58:04,423 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/9acee57e3acc47cc85fd85fe3c92a415 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415
2023-05-31 13:58:04,428 INFO
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415, entries=10, sequenceid=295, filesize=15.3 K 2023-05-31 13:58:04,429 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=19.96 KB/20444 for 65907f62a09743fab59d807d3dccece2 in 28ms, sequenceid=295, compaction requested=false 2023-05-31 13:58:04,429 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:04,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:58:04,429 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-31 13:58:04,440 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=318 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/15b586ff1b4d462ebd282bb61893a82a 2023-05-31 13:58:04,442 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 13:58:04,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35312 deadline: 1685541494441, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=65907f62a09743fab59d807d3dccece2, server=jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:58:04,445 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/15b586ff1b4d462ebd282bb61893a82a as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a 2023-05-31 13:58:04,449 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a, entries=20, sequenceid=318, filesize=25.8 K 2023-05-31 13:58:04,450 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 65907f62a09743fab59d807d3dccece2 in 21ms, sequenceid=318, compaction requested=true 2023-05-31 13:58:04,450 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:04,450 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 13:58:04,450 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 13:58:04,451 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 214166 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 13:58:04,451 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1912): 65907f62a09743fab59d807d3dccece2/info is initiating minor compaction (all files) 2023-05-31 13:58:04,451 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 65907f62a09743fab59d807d3dccece2/info in TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 
2023-05-31 13:58:04,451 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fe2d8e08fdfd45f0a047a26bf6151472, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a] into tmpdir=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp, totalSize=209.1 K 2023-05-31 13:58:04,451 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting fe2d8e08fdfd45f0a047a26bf6151472, keycount=154, bloomtype=ROW, size=168.0 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685541431901 2023-05-31 13:58:04,452 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 9acee57e3acc47cc85fd85fe3c92a415, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1685541482387 2023-05-31 13:58:04,452 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] compactions.Compactor(207): Compacting 15b586ff1b4d462ebd282bb61893a82a, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=318, earliestPutTs=1685541484402 2023-05-31 13:58:04,461 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] throttle.PressureAwareThroughputController(145): 65907f62a09743fab59d807d3dccece2#info#compaction#57 average throughput is 94.41 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 13:58:04,469 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/fafec7605a2442beb3ff486fdfa733ff as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fafec7605a2442beb3ff486fdfa733ff 2023-05-31 13:58:04,474 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 65907f62a09743fab59d807d3dccece2/info of 65907f62a09743fab59d807d3dccece2 into fafec7605a2442beb3ff486fdfa733ff(size=199.8 K), total size for store is 199.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 13:58:04,474 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:04,474 INFO [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2., storeName=65907f62a09743fab59d807d3dccece2/info, priority=13, startTime=1685541484450; duration=0sec 2023-05-31 13:58:04,474 DEBUG [RS:0;jenkins-hbase17:38551-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 13:58:14,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38551] regionserver.HRegion(9158): Flush requested on 65907f62a09743fab59d807d3dccece2 2023-05-31 13:58:14,521 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-31 13:58:14,533 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=332 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/188fc90790074eaab1614a268b556072 2023-05-31 13:58:14,539 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/188fc90790074eaab1614a268b556072 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/188fc90790074eaab1614a268b556072 2023-05-31 13:58:14,545 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/188fc90790074eaab1614a268b556072, entries=10, sequenceid=332, filesize=15.3 K 2023-05-31 13:58:14,546 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for 65907f62a09743fab59d807d3dccece2 in 25ms, sequenceid=332, compaction requested=false 2023-05-31 13:58:14,546 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:16,522 INFO [Listener at localhost.localdomain/43373] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-31 13:58:16,547 INFO [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541419102 with entries=316, filesize=309.16 KB; new WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496523 2023-05-31 13:58:16,547 DEBUG [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42485,DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2,DISK], DatanodeInfoWithStorage[127.0.0.1:46077,DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a,DISK]] 2023-05-31 13:58:16,547 DEBUG [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541419102 is not closed yet, will try 
archiving it next time 2023-05-31 13:58:16,553 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegion(2446): Flush status journal for 6a39871884e9043af0b948703cfa5d61: 2023-05-31 13:58:16,553 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2745): Flushing 1475049869caf7e067eb43bc572e3848 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 13:58:16,568 INFO [Listener at localhost.localdomain/43373] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/.tmp/info/a4fb0db618e0476abaed62f6cd70a365 2023-05-31 13:58:16,573 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/.tmp/info/a4fb0db618e0476abaed62f6cd70a365 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/info/a4fb0db618e0476abaed62f6cd70a365 2023-05-31 13:58:16,578 INFO [Listener at localhost.localdomain/43373] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/info/a4fb0db618e0476abaed62f6cd70a365, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 13:58:16,579 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 1475049869caf7e067eb43bc572e3848 in 26ms, sequenceid=6, compaction requested=false 2023-05-31 13:58:16,580 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegion(2446): Flush status journal for 1475049869caf7e067eb43bc572e3848: 
2023-05-31 13:58:16,580 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-31 13:58:16,589 INFO [Listener at localhost.localdomain/43373] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/info/f27da7831e6a4d0ca01f25602a17c905 2023-05-31 13:58:16,594 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/.tmp/info/f27da7831e6a4d0ca01f25602a17c905 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info/f27da7831e6a4d0ca01f25602a17c905 2023-05-31 13:58:16,600 INFO [Listener at localhost.localdomain/43373] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/info/f27da7831e6a4d0ca01f25602a17c905, entries=16, sequenceid=24, filesize=7.0 K 2023-05-31 13:58:16,601 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 21ms, sequenceid=24, compaction requested=false 2023-05-31 13:58:16,601 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 13:58:16,601 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2745): Flushing 65907f62a09743fab59d807d3dccece2 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 13:58:16,612 INFO [Listener at localhost.localdomain/43373] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=1.05 KB at sequenceid=336 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/f4afb616469b412f902c19684890e7c3 2023-05-31 13:58:16,619 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/.tmp/info/f4afb616469b412f902c19684890e7c3 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/f4afb616469b412f902c19684890e7c3 2023-05-31 13:58:16,624 INFO [Listener at localhost.localdomain/43373] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/f4afb616469b412f902c19684890e7c3, entries=1, sequenceid=336, filesize=5.8 K 2023-05-31 13:58:16,625 INFO [Listener at localhost.localdomain/43373] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 65907f62a09743fab59d807d3dccece2 in 24ms, sequenceid=336, compaction requested=true 2023-05-31 13:58:16,625 DEBUG [Listener at localhost.localdomain/43373] regionserver.HRegion(2446): Flush status journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:16,639 INFO [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496523 with entries=4, filesize=1.22 KB; new WAL 
/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496625 2023-05-31 13:58:16,639 DEBUG [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42485,DS-3b20b628-1ef6-45cf-8556-72c9c9875eb2,DISK], DatanodeInfoWithStorage[127.0.0.1:46077,DS-7c112df3-8de2-4cd4-aeba-1f3929bcfa7a,DISK]] 2023-05-31 13:58:16,640 DEBUG [Listener at localhost.localdomain/43373] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496523 is not closed yet, will try archiving it next time 2023-05-31 13:58:16,641 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541419102 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541419102 2023-05-31 13:58:16,641 INFO [Listener at localhost.localdomain/43373] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 13:58:16,643 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496523 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs/jenkins-hbase17.apache.org%2C38551%2C1685541418717.1685541496523 2023-05-31 13:58:16,742 INFO [Listener at localhost.localdomain/43373] 
hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:58:16,742 INFO [Listener at localhost.localdomain/43373] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 13:58:16,742 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x78d0f774 to 127.0.0.1:61551 2023-05-31 13:58:16,743 DEBUG [Listener at localhost.localdomain/43373] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:16,743 DEBUG [Listener at localhost.localdomain/43373] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:58:16,743 DEBUG [Listener at localhost.localdomain/43373] util.JVMClusterUtil(257): Found active master hash=1301300705, stopped=false 2023-05-31 13:58:16,743 INFO [Listener at localhost.localdomain/43373] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:58:16,745 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:16,745 INFO [Listener at localhost.localdomain/43373] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:58:16,745 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:16,745 DEBUG [Listener at localhost.localdomain/43373] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x54d43635 to 127.0.0.1:61551 2023-05-31 13:58:16,746 DEBUG [Listener at localhost.localdomain/43373] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:16,746 INFO [Listener at localhost.localdomain/43373] regionserver.HRegionServer(2295): ***** STOPPING 
region server 'jenkins-hbase17.apache.org,38551,1685541418717' ***** 2023-05-31 13:58:16,746 INFO [Listener at localhost.localdomain/43373] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 13:58:16,746 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(3303): Received CLOSE for 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:58:16,746 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(3303): Received CLOSE for 1475049869caf7e067eb43bc572e3848 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(3303): Received CLOSE for 65907f62a09743fab59d807d3dccece2 2023-05-31 13:58:16,746 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:58:16,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6a39871884e9043af0b948703cfa5d61, disabling compactions & flushes 2023-05-31 13:58:16,747 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-05-31 13:58:16,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:58:16,747 DEBUG [RS:0;jenkins-hbase17:38551] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x404f6623 to 127.0.0.1:61551 2023-05-31 13:58:16,747 DEBUG [RS:0;jenkins-hbase17:38551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:16,747 INFO [RS:0;jenkins-hbase17:38551] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 13:58:16,747 INFO [RS:0;jenkins-hbase17:38551] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 13:58:16,747 INFO [RS:0;jenkins-hbase17:38551] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 13:58:16,747 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 13:58:16,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:58:16,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. after waiting 0 ms 2023-05-31 13:58:16,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 
2023-05-31 13:58:16,748 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:58:16,748 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-31 13:58:16,748 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1478): Online Regions={6a39871884e9043af0b948703cfa5d61=TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61., 1475049869caf7e067eb43bc572e3848=hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848., 1588230740=hbase:meta,,1.1588230740, 65907f62a09743fab59d807d3dccece2=TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.} 2023-05-31 13:58:16,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:58:16,748 DEBUG [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1504): Waiting on 1475049869caf7e067eb43bc572e3848, 1588230740, 65907f62a09743fab59d807d3dccece2, 6a39871884e9043af0b948703cfa5d61 2023-05-31 13:58:16,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:58:16,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:58:16,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:58:16,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:58:16,758 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.-1] 
regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-bottom] to archive 2023-05-31 13:58:16,761 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-31 13:58:16,764 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:58:16,767 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-31 13:58:16,769 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 13:58:16,770 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 
2023-05-31 13:58:16,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:58:16,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 13:58:16,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/6a39871884e9043af0b948703cfa5d61/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88 2023-05-31 13:58:16,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:58:16,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6a39871884e9043af0b948703cfa5d61: 2023-05-31 13:58:16,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685541442061.6a39871884e9043af0b948703cfa5d61. 2023-05-31 13:58:16,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1475049869caf7e067eb43bc572e3848, disabling compactions & flushes 2023-05-31 13:58:16,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:58:16,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:58:16,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 
after waiting 0 ms 2023-05-31 13:58:16,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:58:16,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/hbase/namespace/1475049869caf7e067eb43bc572e3848/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 13:58:16,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1475049869caf7e067eb43bc572e3848: 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685541419277.1475049869caf7e067eb43bc572e3848. 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 65907f62a09743fab59d807d3dccece2, disabling compactions & flushes 2023-05-31 13:58:16,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 
after waiting 0 ms 2023-05-31 13:58:16,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:58:16,796 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d->hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/c341035db8c45e6c5c51442cddc53e7d/info/6631af04418a45789a1c559506de7b77-top, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/8b549f8e365e44a8a01e607162dc8158, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/292a0cc97a764705ace5f2c39e1f496e, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30, 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/2bcfb346cc034b4e8498d5e28662aeaf, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/07e530dce81d41b1ab1cce72426006a8, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/06172dfe499d445d9e72058ae72ffe8e, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032, 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/b86f515e82cb43d3afc8b929019270a1, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fe2d8e08fdfd45f0a047a26bf6151472, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a] to archive 2023-05-31 13:58:16,797 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 13:58:16,799 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/6631af04418a45789a1c559506de7b77.c341035db8c45e6c5c51442cddc53e7d 2023-05-31 13:58:16,800 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/8b549f8e365e44a8a01e607162dc8158 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/8b549f8e365e44a8a01e607162dc8158 2023-05-31 13:58:16,801 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/TestLogRolling-testLogRolling=c341035db8c45e6c5c51442cddc53e7d-3e500bcc0d9e42589b3135ed4e1bac84 2023-05-31 13:58:16,803 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/da56ff19d7c34219be6a476f25a60c10 2023-05-31 13:58:16,804 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/292a0cc97a764705ace5f2c39e1f496e to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/292a0cc97a764705ace5f2c39e1f496e 2023-05-31 13:58:16,805 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/a951f0761b7d4e46bea9380b64bd2c30 2023-05-31 13:58:16,806 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/ee3f2e4bdf7e4679a57db4c646d7f5a6 2023-05-31 13:58:16,808 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/2bcfb346cc034b4e8498d5e28662aeaf to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/2bcfb346cc034b4e8498d5e28662aeaf 2023-05-31 13:58:16,809 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/0168bf6a7e434ae9a12b1e162783d933 2023-05-31 13:58:16,810 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/055fe3dd505848778264703d01929307 2023-05-31 13:58:16,811 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/07e530dce81d41b1ab1cce72426006a8 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/07e530dce81d41b1ab1cce72426006a8 2023-05-31 13:58:16,812 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/16383522beb24af795ec9af33eb16260 2023-05-31 13:58:16,813 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/c9a0d8d2631f474f9b8a3a017c2d9790 2023-05-31 13:58:16,814 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/06172dfe499d445d9e72058ae72ffe8e to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/06172dfe499d445d9e72058ae72ffe8e 2023-05-31 13:58:16,816 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/5acdadfda78e48bca1a70afd266ff032 2023-05-31 13:58:16,817 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/1084d146543b40c48cb8ac5441e0c580 2023-05-31 13:58:16,818 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/b86f515e82cb43d3afc8b929019270a1 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/b86f515e82cb43d3afc8b929019270a1 2023-05-31 13:58:16,819 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78 to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/12b3a91cbcaa4e948c76ebdf7b794d78 2023-05-31 13:58:16,821 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fa3a7fc0bec04b948827cb0d095d20b9 2023-05-31 13:58:16,822 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fe2d8e08fdfd45f0a047a26bf6151472 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/fe2d8e08fdfd45f0a047a26bf6151472 2023-05-31 13:58:16,823 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe to 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/d545911351c74f6e9960d753fc851afe 2023-05-31 13:58:16,824 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415 to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/9acee57e3acc47cc85fd85fe3c92a415 2023-05-31 13:58:16,826 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a to hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/archive/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/info/15b586ff1b4d462ebd282bb61893a82a 2023-05-31 13:58:16,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/data/default/TestLogRolling-testLogRolling/65907f62a09743fab59d807d3dccece2/recovered.edits/339.seqid, newMaxSeqId=339, maxSeqId=88 2023-05-31 13:58:16,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed 
TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:58:16,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 65907f62a09743fab59d807d3dccece2: 2023-05-31 13:58:16,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685541442061.65907f62a09743fab59d807d3dccece2. 2023-05-31 13:58:16,948 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38551,1685541418717; all regions closed. 2023-05-31 13:58:16,949 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:58:16,956 DEBUG [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs 2023-05-31 13:58:16,956 INFO [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C38551%2C1685541418717.meta:.meta(num 1685541419227) 2023-05-31 13:58:16,956 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/WALs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:58:16,962 DEBUG [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/oldWALs 2023-05-31 13:58:16,963 INFO [RS:0;jenkins-hbase17:38551] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C38551%2C1685541418717:(num 1685541496625) 2023-05-31 13:58:16,963 DEBUG [RS:0;jenkins-hbase17:38551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:16,963 INFO [RS:0;jenkins-hbase17:38551] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:58:16,963 INFO 
[RS:0;jenkins-hbase17:38551] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 13:58:16,963 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 13:58:16,964 INFO [RS:0;jenkins-hbase17:38551] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38551 2023-05-31 13:58:16,967 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:58:16,967 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38551,1685541418717 2023-05-31 13:58:16,968 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:58:16,969 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38551,1685541418717] 2023-05-31 13:58:16,969 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38551,1685541418717; numProcessing=1 2023-05-31 13:58:16,970 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38551,1685541418717 already deleted, retry=false 2023-05-31 13:58:16,970 INFO [RegionServerTracker-0] master.ServerManager(561): 
Cluster shutdown set; jenkins-hbase17.apache.org,38551,1685541418717 expired; onlineServers=0 2023-05-31 13:58:16,970 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,39261,1685541418672' ***** 2023-05-31 13:58:16,970 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 13:58:16,971 DEBUG [M:0;jenkins-hbase17:39261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49afc88b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:58:16,971 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:58:16,971 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,39261,1685541418672; all regions closed. 2023-05-31 13:58:16,971 DEBUG [M:0;jenkins-hbase17:39261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:16,971 DEBUG [M:0;jenkins-hbase17:39261] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 13:58:16,971 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 13:58:16,971 DEBUG [M:0;jenkins-hbase17:39261] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 13:58:16,971 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541418861] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541418861,5,FailOnTimeoutGroup] 2023-05-31 13:58:16,971 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541418861] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541418861,5,FailOnTimeoutGroup] 2023-05-31 13:58:16,972 INFO [M:0;jenkins-hbase17:39261] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 13:58:16,972 INFO [M:0;jenkins-hbase17:39261] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-31 13:58:16,972 INFO [M:0;jenkins-hbase17:39261] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-05-31 13:58:16,973 DEBUG [M:0;jenkins-hbase17:39261] master.HMaster(1512): Stopping service threads 2023-05-31 13:58:16,973 INFO [M:0;jenkins-hbase17:39261] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 13:58:16,973 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 13:58:16,973 ERROR [M:0;jenkins-hbase17:39261] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 13:58:16,973 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:16,973 INFO [M:0;jenkins-hbase17:39261] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 13:58:16,973 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-31 13:58:16,973 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:58:16,974 DEBUG [M:0;jenkins-hbase17:39261] zookeeper.ZKUtil(398): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 13:58:16,974 WARN [M:0;jenkins-hbase17:39261] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 13:58:16,974 INFO [M:0;jenkins-hbase17:39261] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 13:58:16,976 INFO [M:0;jenkins-hbase17:39261] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 13:58:16,976 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:58:16,976 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:16,976 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:58:16,976 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:58:16,976 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:16,976 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.77 KB heapSize=78.52 KB 2023-05-31 13:58:16,982 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 13:58:16,989 INFO [M:0;jenkins-hbase17:39261] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.77 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7510ad4c596141e2a76599a892c51754 2023-05-31 13:58:16,995 INFO [M:0;jenkins-hbase17:39261] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7510ad4c596141e2a76599a892c51754 2023-05-31 13:58:16,997 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7510ad4c596141e2a76599a892c51754 as hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7510ad4c596141e2a76599a892c51754 2023-05-31 13:58:17,005 INFO [M:0;jenkins-hbase17:39261] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7510ad4c596141e2a76599a892c51754 2023-05-31 13:58:17,005 INFO [M:0;jenkins-hbase17:39261] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:40225/user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7510ad4c596141e2a76599a892c51754, entries=18, sequenceid=160, filesize=6.9 K 2023-05-31 13:58:17,006 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegion(2948): Finished flush of dataSize ~64.77 KB/66320, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=160, compaction requested=false 2023-05-31 13:58:17,008 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:17,008 DEBUG [M:0;jenkins-hbase17:39261] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:58:17,008 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a4017835-237e-f647-48d9-c6f7ae9999d5/MasterData/WALs/jenkins-hbase17.apache.org,39261,1685541418672 2023-05-31 13:58:17,012 INFO [M:0;jenkins-hbase17:39261] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 13:58:17,012 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 13:58:17,012 INFO [M:0;jenkins-hbase17:39261] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:39261 2023-05-31 13:58:17,014 DEBUG [M:0;jenkins-hbase17:39261] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,39261,1685541418672 already deleted, retry=false 2023-05-31 13:58:17,069 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:58:17,069 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): regionserver:38551-0x100818671b60001, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:58:17,069 INFO [RS:0;jenkins-hbase17:38551] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38551,1685541418717; zookeeper connection closed. 2023-05-31 13:58:17,071 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3f61f14b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3f61f14b 2023-05-31 13:58:17,071 INFO [Listener at localhost.localdomain/43373] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 13:58:17,169 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:58:17,170 DEBUG [Listener at localhost.localdomain/43373-EventThread] zookeeper.ZKWatcher(600): master:39261-0x100818671b60000, quorum=127.0.0.1:61551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 13:58:17,170 INFO [M:0;jenkins-hbase17:39261] regionserver.HRegionServer(1227): Exiting; 
stopping=jenkins-hbase17.apache.org,39261,1685541418672; zookeeper connection closed. 2023-05-31 13:58:17,172 WARN [Listener at localhost.localdomain/43373] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:58:17,181 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:58:17,288 WARN [BP-1972876440-136.243.18.41-1685541418177 heartbeating to localhost.localdomain/127.0.0.1:40225] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:58:17,288 WARN [BP-1972876440-136.243.18.41-1685541418177 heartbeating to localhost.localdomain/127.0.0.1:40225] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1972876440-136.243.18.41-1685541418177 (Datanode Uuid c6cebfaf-ffe2-4c03-93d2-607def6d11b5) service to localhost.localdomain/127.0.0.1:40225 2023-05-31 13:58:17,289 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/dfs/data/data3/current/BP-1972876440-136.243.18.41-1685541418177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:58:17,290 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/dfs/data/data4/current/BP-1972876440-136.243.18.41-1685541418177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:58:17,291 WARN [Listener at localhost.localdomain/43373] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 13:58:17,297 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 13:58:17,406 WARN [BP-1972876440-136.243.18.41-1685541418177 heartbeating to localhost.localdomain/127.0.0.1:40225] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 13:58:17,406 WARN [BP-1972876440-136.243.18.41-1685541418177 heartbeating to localhost.localdomain/127.0.0.1:40225] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1972876440-136.243.18.41-1685541418177 (Datanode Uuid 428e0772-ec96-45d3-a469-5e068b10505d) service to localhost.localdomain/127.0.0.1:40225 2023-05-31 13:58:17,407 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/dfs/data/data1/current/BP-1972876440-136.243.18.41-1685541418177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:58:17,407 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/cluster_92e29406-5668-e58b-f33f-a50536b257be/dfs/data/data2/current/BP-1972876440-136.243.18.41-1685541418177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 13:58:17,429 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 13:58:17,553 INFO [Listener at localhost.localdomain/43373] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 13:58:17,584 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 13:58:17,593 INFO [Listener at localhost.localdomain/43373] hbase.ResourceChecker(175): after: 
regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 96) - Thread LEAK? -, OpenFileDescriptor=530 (was 499) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=112 (was 137), ProcessCount=168 (was 168), AvailableMemoryMB=7252 (was 7329) 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=530, MaxFileDescriptor=60000, SystemLoadAverage=112, ProcessCount=168, AvailableMemoryMB=7252 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/hadoop.log.dir so I do NOT create it in target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/daca14f1-68ca-392d-c507-4d7339719b62/hadoop.tmp.dir so I do NOT create it in target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9, 
deleteOnExit=true 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/test.cache.data in system properties and HBase conf 2023-05-31 13:58:17,603 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/hadoop.log.dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 13:58:17,604 DEBUG [Listener at localhost.localdomain/43373] fs.HFileSystem(308): The file system is not a 
DistributedFileSystem. Skipping on block location reordering 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 13:58:17,604 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting 
yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/nfs.dump.dir in system properties and HBase conf 2023-05-31 13:58:17,605 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/java.io.tmpdir in system properties and HBase conf 2023-05-31 13:58:17,606 INFO [Listener at 
localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 13:58:17,606 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 13:58:17,606 INFO [Listener at localhost.localdomain/43373] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 13:58:17,607 WARN [Listener at localhost.localdomain/43373] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:58:17,609 WARN [Listener at localhost.localdomain/43373] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:58:17,609 WARN [Listener at localhost.localdomain/43373] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:58:17,634 WARN [Listener at localhost.localdomain/43373] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:58:17,635 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:58:17,641 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/java.io.tmpdir/Jetty_localhost_localdomain_35091_hdfs____2jgosg/webapp 2023-05-31 13:58:17,722 INFO [Listener at localhost.localdomain/43373] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35091 2023-05-31 13:58:17,724 WARN [Listener at localhost.localdomain/43373] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 13:58:17,725 WARN [Listener at localhost.localdomain/43373] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 13:58:17,725 WARN [Listener at localhost.localdomain/43373] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 13:58:17,748 WARN [Listener at localhost.localdomain/36575] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:58:17,762 WARN [Listener at localhost.localdomain/36575] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:58:17,764 WARN [Listener at localhost.localdomain/36575] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 13:58:17,765 INFO [Listener at localhost.localdomain/36575] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:58:17,769 INFO [Listener at localhost.localdomain/36575] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/java.io.tmpdir/Jetty_localhost_46837_datanode____vuu6kh/webapp 2023-05-31 13:58:17,840 INFO [Listener at localhost.localdomain/36575] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46837 2023-05-31 13:58:17,845 WARN [Listener at localhost.localdomain/45161] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:58:17,854 WARN [Listener at localhost.localdomain/45161] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 13:58:17,856 WARN [Listener at localhost.localdomain/45161] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 13:58:17,858 INFO [Listener at localhost.localdomain/45161] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 13:58:17,862 INFO [Listener at localhost.localdomain/45161] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/java.io.tmpdir/Jetty_localhost_37691_datanode____15a2mp/webapp 2023-05-31 13:58:17,895 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb093fafe8e6db9ed: Processing first storage report for DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1 from datanode 8e4665d5-471c-4b6b-ad88-16fd9a403fe6 2023-05-31 13:58:17,895 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb093fafe8e6db9ed: from storage DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1 node DatanodeRegistration(127.0.0.1:35507, datanodeUuid=8e4665d5-471c-4b6b-ad88-16fd9a403fe6, infoPort=38493, infoSecurePort=0, ipcPort=45161, storageInfo=lv=-57;cid=testClusterID;nsid=572376699;c=1685541497611), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:58:17,895 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb093fafe8e6db9ed: Processing first storage report for DS-4dfc924b-9faf-4947-9ebb-3cc41aef1c09 from datanode 8e4665d5-471c-4b6b-ad88-16fd9a403fe6 2023-05-31 13:58:17,895 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb093fafe8e6db9ed: from storage DS-4dfc924b-9faf-4947-9ebb-3cc41aef1c09 node DatanodeRegistration(127.0.0.1:35507, datanodeUuid=8e4665d5-471c-4b6b-ad88-16fd9a403fe6, infoPort=38493, infoSecurePort=0, ipcPort=45161, 
storageInfo=lv=-57;cid=testClusterID;nsid=572376699;c=1685541497611), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:58:17,936 INFO [Listener at localhost.localdomain/45161] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37691 2023-05-31 13:58:17,942 WARN [Listener at localhost.localdomain/46795] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 13:58:17,990 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30c976dd9169d1d2: Processing first storage report for DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4 from datanode 14315c09-26a9-424a-80ff-83ad73033f6d 2023-05-31 13:58:17,990 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30c976dd9169d1d2: from storage DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4 node DatanodeRegistration(127.0.0.1:43353, datanodeUuid=14315c09-26a9-424a-80ff-83ad73033f6d, infoPort=35841, infoSecurePort=0, ipcPort=46795, storageInfo=lv=-57;cid=testClusterID;nsid=572376699;c=1685541497611), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:58:17,990 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30c976dd9169d1d2: Processing first storage report for DS-4bf7d15f-a5b4-4c8d-9e18-368002c8d421 from datanode 14315c09-26a9-424a-80ff-83ad73033f6d 2023-05-31 13:58:17,990 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30c976dd9169d1d2: from storage DS-4bf7d15f-a5b4-4c8d-9e18-368002c8d421 node DatanodeRegistration(127.0.0.1:43353, datanodeUuid=14315c09-26a9-424a-80ff-83ad73033f6d, infoPort=35841, infoSecurePort=0, ipcPort=46795, storageInfo=lv=-57;cid=testClusterID;nsid=572376699;c=1685541497611), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 13:58:18,051 DEBUG [Listener at 
localhost.localdomain/46795] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71 2023-05-31 13:58:18,055 INFO [Listener at localhost.localdomain/46795] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/zookeeper_0, clientPort=61292, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 13:58:18,058 INFO [Listener at localhost.localdomain/46795] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61292 2023-05-31 13:58:18,058 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,059 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,075 INFO [Listener at localhost.localdomain/46795] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e with version=8 2023-05-31 13:58:18,075 INFO [Listener at localhost.localdomain/46795] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:38351/user/jenkins/test-data/3d310d19-c276-a47a-86e8-7a7c073da0b5/hbase-staging 2023-05-31 13:58:18,076 INFO [Listener at localhost.localdomain/46795] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:58:18,077 INFO [Listener at localhost.localdomain/46795] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 13:58:18,079 INFO [Listener at localhost.localdomain/46795] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33835 2023-05-31 13:58:18,080 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,081 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,082 INFO [Listener at localhost.localdomain/46795] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33835 connecting to ZooKeeper ensemble=127.0.0.1:61292 2023-05-31 13:58:18,087 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:338350x0, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:58:18,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33835-0x1008187a7e50000 connected 2023-05-31 13:58:18,097 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:58:18,097 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:58:18,098 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:58:18,098 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33835 2023-05-31 13:58:18,098 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33835 2023-05-31 13:58:18,098 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33835 2023-05-31 13:58:18,099 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33835 2023-05-31 13:58:18,099 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33835 2023-05-31 13:58:18,099 INFO [Listener at localhost.localdomain/46795] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e, hbase.cluster.distributed=false 2023-05-31 13:58:18,115 INFO [Listener at localhost.localdomain/46795] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 13:58:18,116 INFO [Listener at localhost.localdomain/46795] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 13:58:18,117 INFO [Listener at localhost.localdomain/46795] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38657 2023-05-31 13:58:18,118 INFO [Listener at localhost.localdomain/46795] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 13:58:18,118 DEBUG [Listener at localhost.localdomain/46795] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 13:58:18,119 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,119 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,120 INFO [Listener at localhost.localdomain/46795] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38657 connecting to ZooKeeper ensemble=127.0.0.1:61292 2023-05-31 13:58:18,123 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:386570x0, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 13:58:18,124 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38657-0x1008187a7e50001 connected 2023-05-31 13:58:18,124 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 13:58:18,124 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 13:58:18,125 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ZKUtil(164): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 13:58:18,125 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-31 13:58:18,126 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38657 2023-05-31 13:58:18,126 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38657 2023-05-31 13:58:18,126 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-31 13:58:18,126 DEBUG [Listener at localhost.localdomain/46795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-31 13:58:18,127 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,128 DEBUG [Listener at 
localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:58:18,129 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,129 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:58:18,129 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 13:58:18,129 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,130 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:58:18,131 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,33835,1685541498076 from backup master directory 2023-05-31 13:58:18,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 13:58:18,131 DEBUG [Listener 
at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,132 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 13:58:18,132 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 13:58:18,132 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,141 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/hbase.id with ID: b5b41513-9a6c-44f6-81cb-0c1c0a6b969a 2023-05-31 13:58:18,150 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:18,152 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,161 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0b1027f0 to 127.0.0.1:61292 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 
13:58:18,165 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@686de4f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:58:18,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 13:58:18,165 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 13:58:18,166 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:58:18,167 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store-tmp 2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 13:58:18,175 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 13:58:18,175 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 13:58:18,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:58:18,176 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/WALs/jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,180 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33835%2C1685541498076, suffix=, logDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/WALs/jenkins-hbase17.apache.org,33835,1685541498076, archiveDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/oldWALs, maxLogs=10 2023-05-31 13:58:18,188 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/WALs/jenkins-hbase17.apache.org,33835,1685541498076/jenkins-hbase17.apache.org%2C33835%2C1685541498076.1685541498180 2023-05-31 13:58:18,188 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43353,DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4,DISK], DatanodeInfoWithStorage[127.0.0.1:35507,DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1,DISK]] 2023-05-31 13:58:18,188 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:58:18,189 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,189 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,189 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,191 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,192 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 13:58:18,192 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 13:58:18,193 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,196 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 13:58:18,198 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:58:18,198 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=716869, jitterRate=-0.08845454454421997}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:58:18,198 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 13:58:18,198 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 13:58:18,199 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 13:58:18,200 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 13:58:18,200 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 13:58:18,200 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 13:58:18,201 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 13:58:18,201 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 13:58:18,201 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 13:58:18,202 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 13:58:18,215 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 13:58:18,215 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 13:58:18,216 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 13:58:18,216 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 13:58:18,216 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 13:58:18,218 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,218 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 13:58:18,218 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 13:58:18,219 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 13:58:18,220 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:18,220 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:18,220 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,220 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,33835,1685541498076, sessionid=0x1008187a7e50000, setting cluster-up flag (Was=false) 2023-05-31 13:58:18,223 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,225 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 13:58:18,226 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,227 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,230 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 13:58:18,230 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:18,231 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.hbase-snapshot/.tmp 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:58:18,233 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-05-31 13:58:18,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,234 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:58:18,234 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685541528235 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 13:58:18,236 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 13:58:18,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 13:58:18,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 13:58:18,237 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:58:18,237 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 13:58:18,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 13:58:18,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 13:58:18,237 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541498237,5,FailOnTimeoutGroup] 2023-05-31 13:58:18,238 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:58:18,240 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541498237,5,FailOnTimeoutGroup] 2023-05-31 13:58:18,241 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,241 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-05-31 13:58:18,241 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,241 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,247 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:58:18,247 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 13:58:18,247 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e 2023-05-31 13:58:18,253 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,254 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:58:18,255 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/info 2023-05-31 13:58:18,255 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:58:18,255 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,255 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:58:18,256 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:58:18,256 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:58:18,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:58:18,258 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/table 2023-05-31 13:58:18,258 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:58:18,258 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,259 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740 2023-05-31 13:58:18,259 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740 2023-05-31 13:58:18,261 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:58:18,262 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:58:18,264 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:58:18,264 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=708404, jitterRate=-0.09921865165233612}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:58:18,264 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:58:18,264 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 13:58:18,264 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 13:58:18,265 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 13:58:18,265 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 13:58:18,265 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 13:58:18,265 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 13:58:18,265 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 13:58:18,266 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 13:58:18,266 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 13:58:18,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 13:58:18,267 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 13:58:18,269 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 13:58:18,329 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(951): ClusterId : b5b41513-9a6c-44f6-81cb-0c1c0a6b969a 2023-05-31 13:58:18,330 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 13:58:18,334 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 13:58:18,334 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 13:58:18,337 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 13:58:18,338 DEBUG [RS:0;jenkins-hbase17:38657] zookeeper.ReadOnlyZKClient(139): Connect 0x7117ec1d to 127.0.0.1:61292 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:58:18,342 DEBUG [RS:0;jenkins-hbase17:38657] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7fd01c4f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-05-31 13:58:18,342 DEBUG [RS:0;jenkins-hbase17:38657] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bef876d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-05-31 13:58:18,351 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:38657 2023-05-31 13:58:18,352 INFO [RS:0;jenkins-hbase17:38657] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 13:58:18,352 INFO [RS:0;jenkins-hbase17:38657] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 13:58:18,352 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 13:58:18,352 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,33835,1685541498076 with isa=jenkins-hbase17.apache.org/136.243.18.41:38657, startcode=1685541498115 2023-05-31 13:58:18,352 DEBUG [RS:0;jenkins-hbase17:38657] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 13:58:18,355 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48771, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 13:58:18,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33835] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,356 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e 
2023-05-31 13:58:18,356 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36575 2023-05-31 13:58:18,356 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 13:58:18,357 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 13:58:18,358 DEBUG [RS:0;jenkins-hbase17:38657] zookeeper.ZKUtil(162): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,358 WARN [RS:0;jenkins-hbase17:38657] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-31 13:58:18,358 INFO [RS:0;jenkins-hbase17:38657] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:58:18,358 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,358 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38657,1685541498115] 2023-05-31 13:58:18,361 DEBUG [RS:0;jenkins-hbase17:38657] zookeeper.ZKUtil(162): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,362 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 13:58:18,362 INFO [RS:0;jenkins-hbase17:38657] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 13:58:18,363 INFO [RS:0;jenkins-hbase17:38657] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 13:58:18,364 INFO [RS:0;jenkins-hbase17:38657] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 13:58:18,364 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:58:18,364 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 13:58:18,366 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,366 DEBUG [RS:0;jenkins-hbase17:38657] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-05-31 13:58:18,367 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,367 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,367 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,377 INFO [RS:0;jenkins-hbase17:38657] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 13:58:18,377 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38657,1685541498115-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 13:58:18,385 INFO [RS:0;jenkins-hbase17:38657] regionserver.Replication(203): jenkins-hbase17.apache.org,38657,1685541498115 started 2023-05-31 13:58:18,385 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38657,1685541498115, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38657, sessionid=0x1008187a7e50001 2023-05-31 13:58:18,385 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 13:58:18,385 DEBUG [RS:0;jenkins-hbase17:38657] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,385 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38657,1685541498115' 2023-05-31 13:58:18,385 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38657,1685541498115' 2023-05-31 13:58:18,386 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 13:58:18,387 DEBUG [RS:0;jenkins-hbase17:38657] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 13:58:18,387 DEBUG [RS:0;jenkins-hbase17:38657] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 13:58:18,387 INFO [RS:0;jenkins-hbase17:38657] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 13:58:18,387 INFO [RS:0;jenkins-hbase17:38657] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 13:58:18,419 DEBUG [jenkins-hbase17:33835] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 13:58:18,420 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38657,1685541498115, state=OPENING 2023-05-31 13:58:18,421 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 13:58:18,422 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:18,423 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:58:18,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38657,1685541498115}] 2023-05-31 13:58:18,489 INFO [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38657%2C1685541498115, suffix=, 
logDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115, archiveDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs, maxLogs=32 2023-05-31 13:58:18,499 INFO [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115/jenkins-hbase17.apache.org%2C38657%2C1685541498115.1685541498490 2023-05-31 13:58:18,499 DEBUG [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35507,DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1,DISK], DatanodeInfoWithStorage[127.0.0.1:43353,DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4,DISK]] 2023-05-31 13:58:18,579 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,579 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 13:58:18,583 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 13:58:18,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 13:58:18,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:58:18,591 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38657%2C1685541498115.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115, archiveDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs, maxLogs=32 2023-05-31 13:58:18,599 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115/jenkins-hbase17.apache.org%2C38657%2C1685541498115.meta.1685541498592.meta 2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35507,DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1,DISK], DatanodeInfoWithStorage[127.0.0.1:43353,DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4,DISK]] 2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 13:58:18,599 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 13:58:18,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 13:58:18,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 13:58:18,601 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 13:58:18,602 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/info 2023-05-31 13:58:18,602 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/info 2023-05-31 13:58:18,602 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 13:58:18,602 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,603 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 13:58:18,603 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:58:18,603 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/rep_barrier 2023-05-31 13:58:18,604 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 13:58:18,604 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,604 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 13:58:18,605 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/table 2023-05-31 13:58:18,605 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/table 2023-05-31 13:58:18,605 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 13:58:18,606 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 13:58:18,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740 2023-05-31 13:58:18,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740 2023-05-31 13:58:18,609 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 13:58:18,610 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 13:58:18,611 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=859667, jitterRate=0.09312354028224945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 13:58:18,611 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 13:58:18,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685541498579 2023-05-31 13:58:18,617 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 13:58:18,617 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 13:58:18,617 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38657,1685541498115, state=OPEN 2023-05-31 13:58:18,618 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 13:58:18,619 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 13:58:18,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 13:58:18,620 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38657,1685541498115 in 195 msec 2023-05-31 13:58:18,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 13:58:18,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 354 msec 2023-05-31 13:58:18,624 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 391 msec 2023-05-31 13:58:18,624 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685541498624, completionTime=-1 2023-05-31 13:58:18,624 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 13:58:18,624 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 13:58:18,627 DEBUG [hconnection-0x1464c3be-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:58:18,629 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:58:18,631 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 13:58:18,631 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685541558631 2023-05-31 13:58:18,631 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685541618631 2023-05-31 13:58:18,631 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33835,1685541498076-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33835,1685541498076-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33835,1685541498076-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:33835, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 13:58:18,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 13:58:18,639 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 13:58:18,640 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 13:58:18,640 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 13:58:18,641 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 13:58:18,642 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 13:58:18,644 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.tmp/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,644 DEBUG 
[HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.tmp/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22 empty. 2023-05-31 13:58:18,644 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.tmp/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,645 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 13:58:18,652 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 13:58:18,653 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5a1817f7cceff866eb55dc287d7e4a22, NAME => 'hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/.tmp 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5a1817f7cceff866eb55dc287d7e4a22, disabling compactions & flushes 2023-05-31 
13:58:18,660 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. after waiting 0 ms 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:18,660 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:18,660 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5a1817f7cceff866eb55dc287d7e4a22: 2023-05-31 13:58:18,662 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 13:58:18,664 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541498663"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685541498663"}]},"ts":"1685541498663"} 2023-05-31 13:58:18,666 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 13:58:18,667 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 13:58:18,667 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541498667"}]},"ts":"1685541498667"} 2023-05-31 13:58:18,668 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 13:58:18,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5a1817f7cceff866eb55dc287d7e4a22, ASSIGN}] 2023-05-31 13:58:18,678 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5a1817f7cceff866eb55dc287d7e4a22, ASSIGN 2023-05-31 13:58:18,679 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5a1817f7cceff866eb55dc287d7e4a22, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38657,1685541498115; forceNewPlan=false, retain=false 2023-05-31 13:58:18,830 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5a1817f7cceff866eb55dc287d7e4a22, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:18,830 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541498830"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685541498830"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685541498830"}]},"ts":"1685541498830"} 2023-05-31 13:58:18,832 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 5a1817f7cceff866eb55dc287d7e4a22, server=jenkins-hbase17.apache.org,38657,1685541498115}] 2023-05-31 13:58:18,988 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:18,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a1817f7cceff866eb55dc287d7e4a22, NAME => 'hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.', STARTKEY => '', ENDKEY => ''} 2023-05-31 13:58:18,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 13:58:18,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,992 INFO 
[StoreOpener-5a1817f7cceff866eb55dc287d7e4a22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,993 DEBUG [StoreOpener-5a1817f7cceff866eb55dc287d7e4a22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/info 2023-05-31 13:58:18,993 DEBUG [StoreOpener-5a1817f7cceff866eb55dc287d7e4a22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/info 2023-05-31 13:58:18,993 INFO [StoreOpener-5a1817f7cceff866eb55dc287d7e4a22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a1817f7cceff866eb55dc287d7e4a22 columnFamilyName info 2023-05-31 13:58:18,994 INFO [StoreOpener-5a1817f7cceff866eb55dc287d7e4a22-1] regionserver.HStore(310): Store=5a1817f7cceff866eb55dc287d7e4a22/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 13:58:18,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5a1817f7cceff866eb55dc287d7e4a22 2023-05-31 13:58:18,999 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 13:58:19,000 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5a1817f7cceff866eb55dc287d7e4a22; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=716644, jitterRate=-0.08874112367630005}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 13:58:19,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5a1817f7cceff866eb55dc287d7e4a22: 2023-05-31 13:58:19,002 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22., pid=6, masterSystemTime=1685541498985 2023-05-31 13:58:19,004 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:19,004 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. 2023-05-31 13:58:19,005 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5a1817f7cceff866eb55dc287d7e4a22, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38657,1685541498115 2023-05-31 13:58:19,005 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685541499005"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685541499005"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685541499005"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685541499005"}]},"ts":"1685541499005"} 2023-05-31 13:58:19,009 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 13:58:19,009 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 5a1817f7cceff866eb55dc287d7e4a22, server=jenkins-hbase17.apache.org,38657,1685541498115 in 175 msec 2023-05-31 13:58:19,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 13:58:19,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5a1817f7cceff866eb55dc287d7e4a22, ASSIGN in 336 msec 2023-05-31 13:58:19,012 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 13:58:19,012 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685541499012"}]},"ts":"1685541499012"} 2023-05-31 13:58:19,013 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 13:58:19,015 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 13:58:19,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 376 msec 2023-05-31 13:58:19,041 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 13:58:19,042 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:58:19,042 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:19,046 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 13:58:19,055 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, 
quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:58:19,058 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-31 13:58:19,068 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 13:58:19,075 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 13:58:19,080 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-31 13:58:19,092 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 13:58:19,094 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.962sec 2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33835,1685541498076-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 13:58:19,094 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33835,1685541498076-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 13:58:19,096 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 13:58:19,129 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ReadOnlyZKClient(139): Connect 0x27e012c8 to 127.0.0.1:61292 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 13:58:19,135 DEBUG [Listener at localhost.localdomain/46795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49db4df7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 13:58:19,136 DEBUG [hconnection-0x60ddee16-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 13:58:19,138 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36140, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 13:58:19,139 INFO [Listener at localhost.localdomain/46795] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:19,139 INFO [Listener at localhost.localdomain/46795] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 13:58:19,141 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 13:58:19,141 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:19,142 INFO [Listener at localhost.localdomain/46795] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 13:58:19,142 INFO [Listener at localhost.localdomain/46795] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 13:58:19,144 INFO [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs, maxLogs=32 2023-05-31 13:58:19,153 INFO [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1/test.com%2C8080%2C1.1685541499145 2023-05-31 13:58:19,153 DEBUG [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35507,DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1,DISK], DatanodeInfoWithStorage[127.0.0.1:43353,DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4,DISK]] 2023-05-31 13:58:19,171 INFO [Listener at 
localhost.localdomain/46795] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1/test.com%2C8080%2C1.1685541499145 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1/test.com%2C8080%2C1.1685541499153 2023-05-31 13:58:19,172 DEBUG [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43353,DS-072fb7c0-ab4f-49d4-a5ce-880c4e17b5b4,DISK], DatanodeInfoWithStorage[127.0.0.1:35507,DS-fbd4b6eb-53b6-4f9f-bd8a-338617f4ebe1,DISK]] 2023-05-31 13:58:19,172 DEBUG [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1/test.com%2C8080%2C1.1685541499145 is not closed yet, will try archiving it next time 2023-05-31 13:58:19,172 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1 2023-05-31 13:58:19,189 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/test.com,8080,1/test.com%2C8080%2C1.1685541499145 to hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs/test.com%2C8080%2C1.1685541499145 2023-05-31 13:58:19,191 DEBUG [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs 2023-05-31 13:58:19,192 INFO [Listener at localhost.localdomain/46795] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685541499153) 2023-05-31 13:58:19,192 INFO [Listener at localhost.localdomain/46795] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 13:58:19,192 DEBUG [Listener at 
localhost.localdomain/46795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27e012c8 to 127.0.0.1:61292 2023-05-31 13:58:19,192 DEBUG [Listener at localhost.localdomain/46795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 13:58:19,193 DEBUG [Listener at localhost.localdomain/46795] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 13:58:19,193 DEBUG [Listener at localhost.localdomain/46795] util.JVMClusterUtil(257): Found active master hash=187635917, stopped=false 2023-05-31 13:58:19,193 INFO [Listener at localhost.localdomain/46795] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,33835,1685541498076 2023-05-31 13:58:19,195 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:19,195 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 13:58:19,195 INFO [Listener at localhost.localdomain/46795] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 13:58:19,195 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 13:58:19,196 DEBUG [Listener at localhost.localdomain/46795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b1027f0 to 127.0.0.1:61292 2023-05-31 13:58:19,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-05-31 13:58:19,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 13:58:19,197 DEBUG [Listener at localhost.localdomain/46795] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:58:19,197 INFO [Listener at localhost.localdomain/46795] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,38657,1685541498115' *****
2023-05-31 13:58:19,197 INFO [Listener at localhost.localdomain/46795] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-05-31 13:58:19,197 INFO [RS:0;jenkins-hbase17:38657] regionserver.HeapMemoryManager(220): Stopping
2023-05-31 13:58:19,197 INFO [RS:0;jenkins-hbase17:38657] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-05-31 13:58:19,197 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-05-31 13:58:19,197 INFO [RS:0;jenkins-hbase17:38657] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-05-31 13:58:19,198 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(3303): Received CLOSE for 5a1817f7cceff866eb55dc287d7e4a22
2023-05-31 13:58:19,198 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38657,1685541498115
2023-05-31 13:58:19,198 DEBUG [RS:0;jenkins-hbase17:38657] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7117ec1d to 127.0.0.1:61292
2023-05-31 13:58:19,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5a1817f7cceff866eb55dc287d7e4a22, disabling compactions & flushes
2023-05-31 13:58:19,199 DEBUG [RS:0;jenkins-hbase17:38657] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:58:19,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.
2023-05-31 13:58:19,199 INFO [RS:0;jenkins-hbase17:38657] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-31 13:58:19,199 INFO [RS:0;jenkins-hbase17:38657] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-31 13:58:19,199 INFO [RS:0;jenkins-hbase17:38657] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-31 13:58:19,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.
2023-05-31 13:58:19,199 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-31 13:58:19,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22. after waiting 0 ms
2023-05-31 13:58:19,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.
2023-05-31 13:58:19,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 5a1817f7cceff866eb55dc287d7e4a22 1/1 column families, dataSize=78 B heapSize=488 B
2023-05-31 13:58:19,199 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-05-31 13:58:19,199 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1478): Online Regions={5a1817f7cceff866eb55dc287d7e4a22=hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22., 1588230740=hbase:meta,,1.1588230740}
2023-05-31 13:58:19,199 DEBUG [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1504): Waiting on 1588230740, 5a1817f7cceff866eb55dc287d7e4a22
2023-05-31 13:58:19,200 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 13:58:19,200 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 13:58:19,200 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 13:58:19,200 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 13:58:19,200 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 13:58:19,200 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB
2023-05-31 13:58:19,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/.tmp/info/51456f35a65b450b88cdea289a52759e
2023-05-31 13:58:19,212 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/.tmp/info/0ed72f960dd84be884cdb7d2912067f7
2023-05-31 13:58:19,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/.tmp/info/51456f35a65b450b88cdea289a52759e as hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/info/51456f35a65b450b88cdea289a52759e
2023-05-31 13:58:19,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/info/51456f35a65b450b88cdea289a52759e, entries=2, sequenceid=6, filesize=4.8 K
2023-05-31 13:58:19,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5a1817f7cceff866eb55dc287d7e4a22 in 26ms, sequenceid=6, compaction requested=false
2023-05-31 13:58:19,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-05-31 13:58:19,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/.tmp/table/f02b5ca61bf94ce8bc1a5e18e729f2aa
2023-05-31 13:58:19,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/namespace/5a1817f7cceff866eb55dc287d7e4a22/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-05-31 13:58:19,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.
2023-05-31 13:58:19,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5a1817f7cceff866eb55dc287d7e4a22:
2023-05-31 13:58:19,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685541498638.5a1817f7cceff866eb55dc287d7e4a22.
2023-05-31 13:58:19,238 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/.tmp/info/0ed72f960dd84be884cdb7d2912067f7 as hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/info/0ed72f960dd84be884cdb7d2912067f7
2023-05-31 13:58:19,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/info/0ed72f960dd84be884cdb7d2912067f7, entries=10, sequenceid=9, filesize=5.9 K
2023-05-31 13:58:19,243 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/.tmp/table/f02b5ca61bf94ce8bc1a5e18e729f2aa as hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/table/f02b5ca61bf94ce8bc1a5e18e729f2aa
2023-05-31 13:58:19,247 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/table/f02b5ca61bf94ce8bc1a5e18e729f2aa, entries=2, sequenceid=9, filesize=4.7 K
2023-05-31 13:58:19,248 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 48ms, sequenceid=9, compaction requested=false
2023-05-31 13:58:19,248 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-05-31 13:58:19,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-05-31 13:58:19,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-31 13:58:19,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 13:58:19,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 13:58:19,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-31 13:58:19,367 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-05-31 13:58:19,368 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-05-31 13:58:19,399 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38657,1685541498115; all regions closed.
2023-05-31 13:58:19,400 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115
2023-05-31 13:58:19,410 DEBUG [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs
2023-05-31 13:58:19,410 INFO [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C38657%2C1685541498115.meta:.meta(num 1685541498592)
2023-05-31 13:58:19,410 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/WALs/jenkins-hbase17.apache.org,38657,1685541498115
2023-05-31 13:58:19,415 DEBUG [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/oldWALs
2023-05-31 13:58:19,415 INFO [RS:0;jenkins-hbase17:38657] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C38657%2C1685541498115:(num 1685541498490)
2023-05-31 13:58:19,415 DEBUG [RS:0;jenkins-hbase17:38657] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:58:19,415 INFO [RS:0;jenkins-hbase17:38657] regionserver.LeaseManager(133): Closed leases
2023-05-31 13:58:19,415 INFO [RS:0;jenkins-hbase17:38657] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-05-31 13:58:19,415 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 13:58:19,416 INFO [RS:0;jenkins-hbase17:38657] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38657
2023-05-31 13:58:19,418 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38657,1685541498115
2023-05-31 13:58:19,418 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 13:58:19,418 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 13:58:19,419 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38657,1685541498115]
2023-05-31 13:58:19,419 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38657,1685541498115; numProcessing=1
2023-05-31 13:58:19,419 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38657,1685541498115 already deleted, retry=false
2023-05-31 13:58:19,420 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38657,1685541498115 expired; onlineServers=0
2023-05-31 13:58:19,420 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,33835,1685541498076' *****
2023-05-31 13:58:19,420 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-31 13:58:19,420 DEBUG [M:0;jenkins-hbase17:33835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f108d70, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-05-31 13:58:19,420 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33835,1685541498076
2023-05-31 13:58:19,420 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33835,1685541498076; all regions closed.
2023-05-31 13:58:19,420 DEBUG [M:0;jenkins-hbase17:33835] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 13:58:19,420 DEBUG [M:0;jenkins-hbase17:33835] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-31 13:58:19,420 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-31 13:58:19,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541498237] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1685541498237,5,FailOnTimeoutGroup]
2023-05-31 13:58:19,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541498237] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1685541498237,5,FailOnTimeoutGroup]
2023-05-31 13:58:19,420 DEBUG [M:0;jenkins-hbase17:33835] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-31 13:58:19,422 INFO [M:0;jenkins-hbase17:33835] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-31 13:58:19,422 INFO [M:0;jenkins-hbase17:33835] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-31 13:58:19,422 INFO [M:0;jenkins-hbase17:33835] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-05-31 13:58:19,422 DEBUG [M:0;jenkins-hbase17:33835] master.HMaster(1512): Stopping service threads
2023-05-31 13:58:19,422 INFO [M:0;jenkins-hbase17:33835] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-31 13:58:19,423 ERROR [M:0;jenkins-hbase17:33835] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-31 13:58:19,423 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-31 13:58:19,423 INFO [M:0;jenkins-hbase17:33835] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-31 13:58:19,423 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 13:58:19,423 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-31 13:58:19,423 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 13:58:19,423 DEBUG [M:0;jenkins-hbase17:33835] zookeeper.ZKUtil(398): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-31 13:58:19,423 WARN [M:0;jenkins-hbase17:33835] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-31 13:58:19,423 INFO [M:0;jenkins-hbase17:33835] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-31 13:58:19,424 INFO [M:0;jenkins-hbase17:33835] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-31 13:58:19,424 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 13:58:19,424 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:58:19,424 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:58:19,424 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 13:58:19,424 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:58:19,425 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB
2023-05-31 13:58:19,434 INFO [M:0;jenkins-hbase17:33835] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b7c58ef03ee74fa1b33d157888239697
2023-05-31 13:58:19,438 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b7c58ef03ee74fa1b33d157888239697 as hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b7c58ef03ee74fa1b33d157888239697
2023-05-31 13:58:19,441 INFO [M:0;jenkins-hbase17:33835] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36575/user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b7c58ef03ee74fa1b33d157888239697, entries=8, sequenceid=66, filesize=6.3 K
2023-05-31 13:58:19,442 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 18ms, sequenceid=66, compaction requested=false
2023-05-31 13:58:19,443 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 13:58:19,443 DEBUG [M:0;jenkins-hbase17:33835] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 13:58:19,443 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/323d7651-059d-3097-e2fb-f3c370c2c91e/MasterData/WALs/jenkins-hbase17.apache.org,33835,1685541498076
2023-05-31 13:58:19,446 INFO [M:0;jenkins-hbase17:33835] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-31 13:58:19,446 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 13:58:19,446 INFO [M:0;jenkins-hbase17:33835] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33835
2023-05-31 13:58:19,448 DEBUG [M:0;jenkins-hbase17:33835] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,33835,1685541498076 already deleted, retry=false
2023-05-31 13:58:19,596 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:58:19,596 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): master:33835-0x1008187a7e50000, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:58:19,596 INFO [M:0;jenkins-hbase17:33835] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33835,1685541498076; zookeeper connection closed.
2023-05-31 13:58:19,696 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:58:19,696 INFO [RS:0;jenkins-hbase17:38657] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38657,1685541498115; zookeeper connection closed.
2023-05-31 13:58:19,696 DEBUG [Listener at localhost.localdomain/46795-EventThread] zookeeper.ZKWatcher(600): regionserver:38657-0x1008187a7e50001, quorum=127.0.0.1:61292, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 13:58:19,697 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@28802446] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@28802446
2023-05-31 13:58:19,697 INFO [Listener at localhost.localdomain/46795] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-31 13:58:19,698 WARN [Listener at localhost.localdomain/46795] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:58:19,706 INFO [Listener at localhost.localdomain/46795] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:58:19,809 WARN [BP-1551951394-136.243.18.41-1685541497611 heartbeating to localhost.localdomain/127.0.0.1:36575] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 13:58:19,810 WARN [BP-1551951394-136.243.18.41-1685541497611 heartbeating to localhost.localdomain/127.0.0.1:36575] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1551951394-136.243.18.41-1685541497611 (Datanode Uuid 14315c09-26a9-424a-80ff-83ad73033f6d) service to localhost.localdomain/127.0.0.1:36575
2023-05-31 13:58:19,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/dfs/data/data3/current/BP-1551951394-136.243.18.41-1685541497611] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:58:19,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/dfs/data/data4/current/BP-1551951394-136.243.18.41-1685541497611] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:58:19,811 WARN [Listener at localhost.localdomain/46795] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 13:58:19,815 INFO [Listener at localhost.localdomain/46795] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 13:58:19,894 WARN [BP-1551951394-136.243.18.41-1685541497611 heartbeating to localhost.localdomain/127.0.0.1:36575] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1551951394-136.243.18.41-1685541497611 (Datanode Uuid 8e4665d5-471c-4b6b-ad88-16fd9a403fe6) service to localhost.localdomain/127.0.0.1:36575
2023-05-31 13:58:19,895 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/dfs/data/data1/current/BP-1551951394-136.243.18.41-1685541497611] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:58:19,896 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1852fe72-21cc-84b0-2d5a-79a09f2c1d71/cluster_83886268-3cab-2ab6-64ec-4e744159e3f9/dfs/data/data2/current/BP-1551951394-136.243.18.41-1685541497611] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 13:58:19,934 INFO [Listener at localhost.localdomain/46795] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 13:58:20,043 INFO [Listener at localhost.localdomain/46795] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 13:58:20,056 INFO [Listener at localhost.localdomain/46795] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 13:58:20,066 INFO [Listener at localhost.localdomain/46795] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=131 (was 107) - Thread LEAK? -, OpenFileDescriptor=554 (was 530) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=143 (was 112) - SystemLoadAverage LEAK? -, ProcessCount=168 (was 168), AvailableMemoryMB=7174 (was 7252)