2023-06-05 17:52:43,136 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb
2023-06-05 17:52:43,150 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-06-05 17:52:43,180 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=327, ProcessCount=170, AvailableMemoryMB=8264
2023-06-05 17:52:43,186 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-05 17:52:43,186 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d, deleteOnExit=true
2023-06-05 17:52:43,186 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-05 17:52:43,193 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/test.cache.data in system properties and HBase conf
2023-06-05 17:52:43,194 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/hadoop.tmp.dir in system properties and HBase conf
2023-06-05 17:52:43,194 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/hadoop.log.dir in system properties and HBase conf
2023-06-05 17:52:43,195 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-05 17:52:43,195 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-05 17:52:43,195 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-05 17:52:43,308 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-06-05 17:52:43,672 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-05 17:52:43,678 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:52:43,679 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:52:43,679 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-05 17:52:43,680 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:52:43,680 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-05 17:52:43,680 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-05 17:52:43,681 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:52:43,681 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-05 17:52:43,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-05 17:52:43,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/nfs.dump.dir in system properties and HBase conf
2023-06-05 17:52:43,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/java.io.tmpdir in system properties and HBase conf
2023-06-05 17:52:43,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-05 17:52:43,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-05 17:52:43,684 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-05 17:52:44,193 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-05 17:52:44,204 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:52:44,208 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:52:44,472 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-06-05 17:52:44,633 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-06-05 17:52:44,651 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:52:44,697 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:52:44,739 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/java.io.tmpdir/Jetty_localhost_localdomain_36281_hdfs____jiplc2/webapp
2023-06-05 17:52:44,903 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36281
2023-06-05 17:52:44,911 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-05 17:52:44,913 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:52:44,914 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:52:45,288 WARN [Listener at localhost.localdomain/41259] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:52:45,354 WARN [Listener at localhost.localdomain/41259] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:52:45,371 WARN [Listener at localhost.localdomain/41259] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:52:45,379 INFO [Listener at localhost.localdomain/41259] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:52:45,383 INFO [Listener at localhost.localdomain/41259] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/java.io.tmpdir/Jetty_localhost_38559_datanode____xfdvy3/webapp
2023-06-05 17:52:45,466 INFO [Listener at localhost.localdomain/41259] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38559
2023-06-05 17:52:45,733 WARN [Listener at localhost.localdomain/38345] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:52:45,741 WARN [Listener at localhost.localdomain/38345] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:52:45,746 WARN [Listener at localhost.localdomain/38345] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:52:45,748 INFO [Listener at localhost.localdomain/38345] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:52:45,753 INFO [Listener at localhost.localdomain/38345] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/java.io.tmpdir/Jetty_localhost_45219_datanode____.v8t6n4/webapp
2023-06-05 17:52:45,830 INFO [Listener at localhost.localdomain/38345] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45219
2023-06-05 17:52:45,844 WARN [Listener at localhost.localdomain/44643] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:52:46,123 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb37b9084d4d9c83: Processing first storage report for DS-eca73c4f-cc00-4f04-ab77-d977847d74f6 from datanode 4dcfca39-5b55-45fb-aab3-94c03fb76fcc
2023-06-05 17:52:46,124 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb37b9084d4d9c83: from storage DS-eca73c4f-cc00-4f04-ab77-d977847d74f6 node DatanodeRegistration(127.0.0.1:41157, datanodeUuid=4dcfca39-5b55-45fb-aab3-94c03fb76fcc, infoPort=45409, infoSecurePort=0, ipcPort=38345, storageInfo=lv=-57;cid=testClusterID;nsid=1498143000;c=1685987564273), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7d5ee7cb4744750b: Processing first storage report for DS-b5fce8d7-6950-4ed9-8038-80e4122850a3 from datanode 171a7647-648b-42fa-be2e-49b7e3a0a049
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7d5ee7cb4744750b: from storage DS-b5fce8d7-6950-4ed9-8038-80e4122850a3 node DatanodeRegistration(127.0.0.1:32987, datanodeUuid=171a7647-648b-42fa-be2e-49b7e3a0a049, infoPort=37107, infoSecurePort=0, ipcPort=44643, storageInfo=lv=-57;cid=testClusterID;nsid=1498143000;c=1685987564273), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb37b9084d4d9c83: Processing first storage report for DS-42b9c0bd-4fb3-417e-8ce4-481c14c96beb from datanode 4dcfca39-5b55-45fb-aab3-94c03fb76fcc
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb37b9084d4d9c83: from storage DS-42b9c0bd-4fb3-417e-8ce4-481c14c96beb node DatanodeRegistration(127.0.0.1:41157, datanodeUuid=4dcfca39-5b55-45fb-aab3-94c03fb76fcc, infoPort=45409, infoSecurePort=0, ipcPort=38345, storageInfo=lv=-57;cid=testClusterID;nsid=1498143000;c=1685987564273), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7d5ee7cb4744750b: Processing first storage report for DS-af5486e4-58b3-427f-aaff-342b0aa66bfe from datanode 171a7647-648b-42fa-be2e-49b7e3a0a049
2023-06-05 17:52:46,125 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7d5ee7cb4744750b: from storage DS-af5486e4-58b3-427f-aaff-342b0aa66bfe node DatanodeRegistration(127.0.0.1:32987, datanodeUuid=171a7647-648b-42fa-be2e-49b7e3a0a049, infoPort=37107, infoSecurePort=0, ipcPort=44643, storageInfo=lv=-57;cid=testClusterID;nsid=1498143000;c=1685987564273), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:52:46,178 DEBUG [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb
2023-06-05 17:52:46,247 INFO [Listener at localhost.localdomain/44643] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/zookeeper_0, clientPort=53414, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-05 17:52:46,260 INFO [Listener at localhost.localdomain/44643] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53414
2023-06-05 17:52:46,271 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:46,274 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:46,913 INFO [Listener at localhost.localdomain/44643] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6 with version=8
2023-06-05 17:52:46,913 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging
2023-06-05 17:52:47,268 INFO [Listener at localhost.localdomain/44643] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-06-05 17:52:47,640 INFO [Listener at localhost.localdomain/44643] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:52:47,665 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:47,665 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:47,666 INFO [Listener at localhost.localdomain/44643] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:52:47,666 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:47,666 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:52:47,779 INFO [Listener at localhost.localdomain/44643] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:52:47,837 DEBUG [Listener at localhost.localdomain/44643] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-06-05 17:52:47,913 INFO [Listener at localhost.localdomain/44643] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39011
2023-06-05 17:52:47,921 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:47,923 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:47,943 INFO [Listener at localhost.localdomain/44643] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39011 connecting to ZooKeeper ensemble=127.0.0.1:53414
2023-06-05 17:52:47,973 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:390110x0, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:52:47,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39011-0x101bc66babe0000 connected
2023-06-05 17:52:48,059 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:52:48,061 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:52:48,067 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:52:48,074 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39011
2023-06-05 17:52:48,075 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39011
2023-06-05 17:52:48,076 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39011
2023-06-05 17:52:48,076 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39011
2023-06-05 17:52:48,077 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39011
2023-06-05 17:52:48,082 INFO [Listener at localhost.localdomain/44643] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6, hbase.cluster.distributed=false
2023-06-05 17:52:48,140 INFO [Listener at localhost.localdomain/44643] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:52:48,141 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:48,141 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:48,141 INFO [Listener at localhost.localdomain/44643] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:52:48,141 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:52:48,141 INFO [Listener at localhost.localdomain/44643] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:52:48,145 INFO [Listener at localhost.localdomain/44643] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:52:48,148 INFO [Listener at localhost.localdomain/44643] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33549
2023-06-05 17:52:48,150 INFO [Listener at localhost.localdomain/44643] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-05 17:52:48,155 DEBUG [Listener at localhost.localdomain/44643] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-05 17:52:48,157 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:48,159 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:48,161 INFO [Listener at localhost.localdomain/44643] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33549 connecting to ZooKeeper ensemble=127.0.0.1:53414
2023-06-05 17:52:48,164 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:335490x0, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:52:48,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33549-0x101bc66babe0001 connected
2023-06-05 17:52:48,165 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:52:48,166 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:52:48,167 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ZKUtil(164): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:52:48,168 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33549
2023-06-05 17:52:48,168 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33549
2023-06-05 17:52:48,169 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33549
2023-06-05 17:52:48,169 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33549
2023-06-05 17:52:48,169 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33549
2023-06-05 17:52:48,171 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,39011,1685987567025
2023-06-05 17:52:48,180 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-05 17:52:48,182 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39011,1685987567025
2023-06-05 17:52:48,201 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-05 17:52:48,201 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-05 17:52:48,201 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:52:48,202 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-05 17:52:48,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39011,1685987567025 from backup master directory
2023-06-05 17:52:48,204 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-05 17:52:48,206 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39011,1685987567025
2023-06-05 17:52:48,206 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-05 17:52:48,206 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-05 17:52:48,207 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39011,1685987567025
2023-06-05 17:52:48,209 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-06-05 17:52:48,210 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-06-05 17:52:48,294 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase.id with ID: 5c49762e-f07c-4e68-8129-54686ce4be91
2023-06-05 17:52:48,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:52:48,358 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:52:48,399 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5eba0700 to 127.0.0.1:53414 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-05 17:52:48,427 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49293cf5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-05 17:52:48,446 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-05 17:52:48,448 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-05 17:52:48,455 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:52:48,482 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store-tmp
2023-06-05 17:52:48,511 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:52:48,511 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:52:48,511 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:52:48,511 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:52:48,512 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:52:48,512 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:52:48,512 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:52:48,512 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:52:48,513 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/WALs/jenkins-hbase20.apache.org,39011,1685987567025
2023-06-05 17:52:48,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39011%2C1685987567025, suffix=, logDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/WALs/jenkins-hbase20.apache.org,39011,1685987567025, archiveDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/oldWALs, maxLogs=10
2023-06-05 17:52:48,547 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:52:48,568 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/WALs/jenkins-hbase20.apache.org,39011,1685987567025/jenkins-hbase20.apache.org%2C39011%2C1685987567025.1685987568545 2023-06-05 17:52:48,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:52:48,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 
'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:52:48,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:52:48,571 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,572 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,625 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,632 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-05 17:52:48,650 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-05 17:52:48,663 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:52:48,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,669 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,683 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:52:48,687 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:52:48,689 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=814867, jitterRate=0.03615756332874298}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:52:48,689 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:52:48,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-05 17:52:48,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-05 17:52:48,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-05 17:52:48,709 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-05 17:52:48,711 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-05 17:52:48,739 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 27 msec 2023-06-05 17:52:48,739 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-05 17:52:48,760 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-05 17:52:48,765 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-05 17:52:48,787 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:52:48,790 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:52:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:52:48,796 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:52:48,800 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:52:48,802 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:48,803 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:52:48,804 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:52:48,814 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:52:48,817 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:52:48,817 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:52:48,817 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:48,818 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39011,1685987567025, sessionid=0x101bc66babe0000, setting cluster-up flag (Was=false) 2023-06-05 17:52:48,830 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:48,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:52:48,835 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:52:48,838 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:48,841 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:52:48,842 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:52:48,845 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.hbase-snapshot/.tmp 2023-06-05 17:52:48,873 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(951): ClusterId : 5c49762e-f07c-4e68-8129-54686ce4be91 2023-06-05 17:52:48,877 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:52:48,881 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:52:48,881 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-05 17:52:48,884 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:52:48,885 DEBUG [RS:0;jenkins-hbase20:33549] zookeeper.ReadOnlyZKClient(139): Connect 0x39b8bcc9 to 127.0.0.1:53414 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-06-05 17:52:48,889 DEBUG [RS:0;jenkins-hbase20:33549] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@377f558e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:52:48,890 DEBUG [RS:0;jenkins-hbase20:33549] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3340d7b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:52:48,913 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:33549 2023-06-05 17:52:48,916 INFO [RS:0;jenkins-hbase20:33549] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:52:48,917 INFO [RS:0;jenkins-hbase20:33549] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:52:48,917 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-05 17:52:48,919 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,39011,1685987567025 with isa=jenkins-hbase20.apache.org/148.251.75.209:33549, startcode=1685987568140 2023-06-05 17:52:48,935 DEBUG [RS:0;jenkins-hbase20:33549] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-05 17:52:48,943 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:52:48,953 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:52:48,954 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987598954 2023-06-05 17:52:48,956 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:52:48,960 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:52:48,961 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:52:48,966 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 
'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:52:48,967 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:52:48,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:52:48,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:52:48,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:52:48,977 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-05 17:52:48,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-05 17:52:48,983 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:52:48,984 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:52:48,984 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:52:48,986 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:52:48,987 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:52:48,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987568989,5,FailOnTimeoutGroup] 2023-06-05 17:52:48,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987568991,5,FailOnTimeoutGroup] 2023-06-05 17:52:48,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:52:48,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-05 17:52:48,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-06-05 17:52:48,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-05 17:52:49,013 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:52:49,015 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:52:49,015 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6 2023-06-05 17:52:49,040 DEBUG [PEWorker-1] 
regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:52:49,044 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:52:49,047 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/info 2023-06-05 17:52:49,047 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:52:49,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:52:49,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-05 17:52:49,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:52:49,053 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-05 17:52:49,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:52:49,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-05 17:52:49,056 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/table
2023-06-05 17:52:49,057 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-05 17:52:49,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:52:49,060 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740
2023-06-05 17:52:49,061 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740
2023-06-05 17:52:49,066 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-05 17:52:49,068 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-05 17:52:49,072 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-05 17:52:49,073 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=831336, jitterRate=0.05709950625896454}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-05 17:52:49,073 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-05 17:52:49,073 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:52:49,074 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:52:49,074 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:52:49,074 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:52:49,074 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36985, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2023-06-05 17:52:49,074 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:52:49,075 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-05 17:52:49,075 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:52:49,080 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-06-05 17:52:49,081 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-06-05 17:52:49,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,090 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-06-05 17:52:49,099 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6
2023-06-05 17:52:49,099 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41259
2023-06-05 17:52:49,100 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-06-05 17:52:49,102 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-06-05 17:52:49,104 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-06-05 17:52:49,105 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:52:49,106 DEBUG [RS:0;jenkins-hbase20:33549] zookeeper.ZKUtil(162): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,106 WARN [RS:0;jenkins-hbase20:33549] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-05 17:52:49,107 INFO [RS:0;jenkins-hbase20:33549] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:52:49,107 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,109 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,33549,1685987568140]
2023-06-05 17:52:49,116 DEBUG [RS:0;jenkins-hbase20:33549] zookeeper.ZKUtil(162): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,126 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-05 17:52:49,134 INFO [RS:0;jenkins-hbase20:33549] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-05 17:52:49,150 INFO [RS:0;jenkins-hbase20:33549] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-05 17:52:49,153 INFO [RS:0;jenkins-hbase20:33549] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-05 17:52:49,153 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,154 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-05 17:52:49,160 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,160 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,161 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,162 DEBUG [RS:0;jenkins-hbase20:33549] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:52:49,163 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,163 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,163 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,176 INFO [RS:0;jenkins-hbase20:33549] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-05 17:52:49,178 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33549,1685987568140-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,190 INFO [RS:0;jenkins-hbase20:33549] regionserver.Replication(203): jenkins-hbase20.apache.org,33549,1685987568140 started
2023-06-05 17:52:49,190 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,33549,1685987568140, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:33549, sessionid=0x101bc66babe0001
2023-06-05 17:52:49,191 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-05 17:52:49,191 DEBUG [RS:0;jenkins-hbase20:33549] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,191 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33549,1685987568140'
2023-06-05 17:52:49,191 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33549,1685987568140'
2023-06-05 17:52:49,192 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-05 17:52:49,193 DEBUG [RS:0;jenkins-hbase20:33549] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-05 17:52:49,193 DEBUG [RS:0;jenkins-hbase20:33549] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-05 17:52:49,194 INFO [RS:0;jenkins-hbase20:33549] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-05 17:52:49,194 INFO [RS:0;jenkins-hbase20:33549] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-05 17:52:49,257 DEBUG [jenkins-hbase20:39011] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-05 17:52:49,261 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33549,1685987568140, state=OPENING
2023-06-05 17:52:49,270 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-05 17:52:49,271 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:52:49,272 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:52:49,275 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33549,1685987568140}]
2023-06-05 17:52:49,303 INFO [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33549%2C1685987568140, suffix=, logDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140, archiveDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/oldWALs, maxLogs=32
2023-06-05 17:52:49,318 INFO [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987569307
2023-06-05 17:52:49,319 DEBUG [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]]
2023-06-05 17:52:49,458 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:49,460 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-05 17:52:49,464 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-05 17:52:49,478 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-05 17:52:49,479 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:52:49,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33549%2C1685987568140.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140, archiveDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/oldWALs, maxLogs=32
2023-06-05 17:52:49,501 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.meta.1685987569486.meta
2023-06-05 17:52:49,501 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]]
2023-06-05 17:52:49,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:52:49,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-05 17:52:49,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-05 17:52:49,524 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-05 17:52:49,528 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-05 17:52:49,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:52:49,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-05 17:52:49,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-05 17:52:49,531 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-05 17:52:49,534 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/info
2023-06-05 17:52:49,534 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/info
2023-06-05 17:52:49,534 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-05 17:52:49,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:52:49,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-05 17:52:49,536 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:52:49,537 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:52:49,537 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-05 17:52:49,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:52:49,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-05 17:52:49,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/table
2023-06-05 17:52:49,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/table
2023-06-05 17:52:49,540 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-05 17:52:49,541 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:52:49,543 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740
2023-06-05 17:52:49,546 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740
2023-06-05 17:52:49,549 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-05 17:52:49,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-05 17:52:49,552 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=690636, jitterRate=-0.12181192636489868}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-05 17:52:49,553 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-05 17:52:49,561 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987569453
2023-06-05 17:52:49,577 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-06-05 17:52:49,578 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-06-05 17:52:49,578 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33549,1685987568140, state=OPEN
2023-06-05 17:52:49,580 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-06-05 17:52:49,580 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:52:49,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-06-05 17:52:49,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33549,1685987568140 in 305 msec
2023-06-05 17:52:49,591 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-06-05 17:52:49,591 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 498 msec
2023-06-05 17:52:49,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 706 msec
2023-06-05 17:52:49,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987569597, completionTime=-1
2023-06-05 17:52:49,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-06-05 17:52:49,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-06-05 17:52:49,654 DEBUG [hconnection-0x1bb06c3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:52:49,657 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:52:49,674 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-06-05 17:52:49,674 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987629674
2023-06-05 17:52:49,674 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685987689674
2023-06-05 17:52:49,674 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 75 msec
2023-06-05 17:52:49,698 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39011,1685987567025-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,699 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39011,1685987567025-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,699 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39011,1685987567025-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,700 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39011, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,700 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-05 17:52:49,706 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-05 17:52:49,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-05 17:52:49,714 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-05 17:52:49,722 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-05 17:52:49,725 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-05 17:52:49,728 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-05 17:52:49,749 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/hbase/namespace/541e119b76ba134a029c42a38b54131d
2023-06-05 17:52:49,751 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/hbase/namespace/541e119b76ba134a029c42a38b54131d empty.
2023-06-05 17:52:49,752 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/hbase/namespace/541e119b76ba134a029c42a38b54131d
2023-06-05 17:52:49,752 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-05 17:52:49,824 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-05 17:52:49,826 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 541e119b76ba134a029c42a38b54131d, NAME => 'hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp
2023-06-05 17:52:49,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:52:49,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 541e119b76ba134a029c42a38b54131d, disabling compactions & flushes
2023-06-05 17:52:49,845 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.
2023-06-05 17:52:49,846 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.
2023-06-05 17:52:49,846 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. after waiting 0 ms
2023-06-05 17:52:49,846 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.
2023-06-05 17:52:49,846 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.
2023-06-05 17:52:49,846 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 541e119b76ba134a029c42a38b54131d:
2023-06-05 17:52:49,851 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-05 17:52:49,868 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987569854"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987569854"}]},"ts":"1685987569854"}
2023-06-05 17:52:49,890 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-05 17:52:49,892 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-05 17:52:49,896 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987569892"}]},"ts":"1685987569892"}
2023-06-05 17:52:49,900 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-05 17:52:49,909 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=541e119b76ba134a029c42a38b54131d, ASSIGN}]
2023-06-05 17:52:49,912 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=541e119b76ba134a029c42a38b54131d, ASSIGN
2023-06-05 17:52:49,914 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=541e119b76ba134a029c42a38b54131d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33549,1685987568140; forceNewPlan=false, retain=false
2023-06-05 17:52:50,066 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=541e119b76ba134a029c42a38b54131d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33549,1685987568140
2023-06-05 17:52:50,067 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put
{"totalColumns":3,"row":"hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987570065"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987570065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987570065"}]},"ts":"1685987570065"} 2023-06-05 17:52:50,077 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 541e119b76ba134a029c42a38b54131d, server=jenkins-hbase20.apache.org,33549,1685987568140}] 2023-06-05 17:52:50,242 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:52:50,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 541e119b76ba134a029c42a38b54131d, NAME => 'hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:52:50,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:52:50,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,248 INFO 
[StoreOpener-541e119b76ba134a029c42a38b54131d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,251 DEBUG [StoreOpener-541e119b76ba134a029c42a38b54131d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/info 2023-06-05 17:52:50,251 DEBUG [StoreOpener-541e119b76ba134a029c42a38b54131d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/info 2023-06-05 17:52:50,252 INFO [StoreOpener-541e119b76ba134a029c42a38b54131d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 541e119b76ba134a029c42a38b54131d columnFamilyName info 2023-06-05 17:52:50,253 INFO [StoreOpener-541e119b76ba134a029c42a38b54131d-1] regionserver.HStore(310): Store=541e119b76ba134a029c42a38b54131d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-05 17:52:50,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,262 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 541e119b76ba134a029c42a38b54131d 2023-06-05 17:52:50,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:52:50,269 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 541e119b76ba134a029c42a38b54131d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852621, jitterRate=0.0841638445854187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:52:50,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 541e119b76ba134a029c42a38b54131d: 2023-06-05 17:52:50,272 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d., pid=6, masterSystemTime=1685987570232 2023-06-05 17:52:50,276 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:52:50,276 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:52:50,278 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=541e119b76ba134a029c42a38b54131d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:52:50,279 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987570277"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987570277"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987570277"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987570277"}]},"ts":"1685987570277"} 2023-06-05 17:52:50,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-05 17:52:50,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 541e119b76ba134a029c42a38b54131d, server=jenkins-hbase20.apache.org,33549,1685987568140 in 205 msec 2023-06-05 17:52:50,290 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-05 17:52:50,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=541e119b76ba134a029c42a38b54131d, ASSIGN in 378 msec 2023-06-05 17:52:50,292 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:52:50,292 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987570292"}]},"ts":"1685987570292"} 2023-06-05 17:52:50,295 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-05 17:52:50,299 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:52:50,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 584 msec 2023-06-05 17:52:50,325 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-05 17:52:50,326 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:52:50,326 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:50,365 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-05 17:52:50,383 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, 
quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:52:50,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-06-05 17:52:50,399 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-05 17:52:50,412 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:52:50,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-06-05 17:52:50,430 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-05 17:52:50,432 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-05 17:52:50,434 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.227sec 2023-06-05 17:52:50,436 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-05 17:52:50,437 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-05 17:52:50,437 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-05 17:52:50,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39011,1685987567025-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-05 17:52:50,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39011,1685987567025-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-05 17:52:50,448 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-05 17:52:50,480 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ReadOnlyZKClient(139): Connect 0x5f199977 to 127.0.0.1:53414 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:52:50,484 DEBUG [Listener at localhost.localdomain/44643] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f7466cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:52:50,497 DEBUG [hconnection-0x26b1719-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:52:50,507 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56290, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:52:50,517 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:52:50,518 INFO [Listener at localhost.localdomain/44643] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:52:50,526 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-05 17:52:50,526 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:52:50,527 INFO [Listener at localhost.localdomain/44643] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-05 17:52:50,534 DEBUG [Listener at localhost.localdomain/44643] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-05 17:52:50,537 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-05 17:52:50,545 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-05 17:52:50,545 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-05 17:52:50,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:52:50,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-05 17:52:50,553 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:52:50,555 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:52:50,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-05 17:52:50,559 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,561 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a empty. 2023-06-05 17:52:50,564 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,564 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-05 17:52:50,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:52:50,593 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-05 17:52:50,595 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4d3d7ae7a4e21b8280bec4e841e2fb3a, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/.tmp 2023-06-05 17:52:50,610 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated 
TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:52:50,610 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 4d3d7ae7a4e21b8280bec4e841e2fb3a, disabling compactions & flushes 2023-06-05 17:52:50,610 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:52:50,610 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:52:50,610 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. after waiting 0 ms 2023-06-05 17:52:50,610 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:52:50,610 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 
2023-06-05 17:52:50,611 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:52:50,615 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:52:50,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685987570617"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987570617"}]},"ts":"1685987570617"} 2023-06-05 17:52:50,621 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-05 17:52:50,622 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:52:50,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987570622"}]},"ts":"1685987570622"} 2023-06-05 17:52:50,625 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-05 17:52:50,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=4d3d7ae7a4e21b8280bec4e841e2fb3a, ASSIGN}] 2023-06-05 17:52:50,630 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=4d3d7ae7a4e21b8280bec4e841e2fb3a, ASSIGN 2023-06-05 17:52:50,631 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=4d3d7ae7a4e21b8280bec4e841e2fb3a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33549,1685987568140; forceNewPlan=false, retain=false 2023-06-05 17:52:50,783 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4d3d7ae7a4e21b8280bec4e841e2fb3a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:52:50,783 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685987570783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987570783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987570783"}]},"ts":"1685987570783"} 2023-06-05 17:52:50,789 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 4d3d7ae7a4e21b8280bec4e841e2fb3a, server=jenkins-hbase20.apache.org,33549,1685987568140}] 2023-06-05 17:52:50,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 
2023-06-05 17:52:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4d3d7ae7a4e21b8280bec4e841e2fb3a, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:52:50,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:52:50,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,960 INFO [StoreOpener-4d3d7ae7a4e21b8280bec4e841e2fb3a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,962 DEBUG [StoreOpener-4d3d7ae7a4e21b8280bec4e841e2fb3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info 2023-06-05 17:52:50,963 DEBUG [StoreOpener-4d3d7ae7a4e21b8280bec4e841e2fb3a-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info 2023-06-05 17:52:50,963 INFO [StoreOpener-4d3d7ae7a4e21b8280bec4e841e2fb3a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4d3d7ae7a4e21b8280bec4e841e2fb3a columnFamilyName info 2023-06-05 17:52:50,964 INFO [StoreOpener-4d3d7ae7a4e21b8280bec4e841e2fb3a-1] regionserver.HStore(310): Store=4d3d7ae7a4e21b8280bec4e841e2fb3a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:52:50,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,972 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:52:50,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:52:50,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 4d3d7ae7a4e21b8280bec4e841e2fb3a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=789141, jitterRate=0.003445923328399658}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:52:50,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:52:50,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a., pid=11, masterSystemTime=1685987570945 2023-06-05 17:52:50,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:52:50,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 
2023-06-05 17:52:50,984 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4d3d7ae7a4e21b8280bec4e841e2fb3a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:52:50,984 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685987570984"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987570984"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987570984"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987570984"}]},"ts":"1685987570984"} 2023-06-05 17:52:50,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-05 17:52:50,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4d3d7ae7a4e21b8280bec4e841e2fb3a, server=jenkins-hbase20.apache.org,33549,1685987568140 in 198 msec 2023-06-05 17:52:50,994 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-05 17:52:50,994 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=4d3d7ae7a4e21b8280bec4e841e2fb3a, ASSIGN in 362 msec 2023-06-05 17:52:50,995 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:52:50,996 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987570995"}]},"ts":"1685987570995"} 2023-06-05 17:52:50,998 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-05 17:52:51,001 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:52:51,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 452 msec 2023-06-05 17:52:55,050 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-05 17:52:55,132 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-05 17:52:55,134 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-05 17:52:55,135 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-05 17:52:57,265 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-05 17:52:57,265 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-05 17:53:00,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39011] 
master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:53:00,587 INFO [Listener at localhost.localdomain/44643] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-05 17:53:00,592 DEBUG [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-05 17:53:00,593 DEBUG [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:53:12,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33549] regionserver.HRegion(9158): Flush requested on 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:53:12,652 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4d3d7ae7a4e21b8280bec4e841e2fb3a 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:53:12,719 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/929942e0cb34479587d470a0cbb40b78 2023-06-05 17:53:12,786 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/929942e0cb34479587d470a0cbb40b78 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78 2023-06-05 17:53:12,796 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78, entries=7, sequenceid=11, filesize=12.1 K 2023-06-05 17:53:12,799 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 4d3d7ae7a4e21b8280bec4e841e2fb3a in 147ms, sequenceid=11, compaction requested=false 2023-06-05 17:53:12,800 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:53:20,876 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 203 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:23,084 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:25,291 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:27,497 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:27,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33549] regionserver.HRegion(9158): Flush requested on 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:53:27,497 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4d3d7ae7a4e21b8280bec4e841e2fb3a 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:53:27,700 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:27,726 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/8ddf8a19885241fdb15125bc6fe52756 2023-06-05 17:53:27,738 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/8ddf8a19885241fdb15125bc6fe52756 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756 2023-06-05 17:53:27,746 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756, entries=7, sequenceid=21, filesize=12.1 K 2023-06-05 17:53:27,949 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:27,951 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 4d3d7ae7a4e21b8280bec4e841e2fb3a in 452ms, sequenceid=21, compaction requested=false 2023-06-05 17:53:27,951 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:53:27,951 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-05 17:53:27,952 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:53:27,955 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78 because midkey is the same as first or last row 2023-06-05 17:53:29,703 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:31,907 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:31,908 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33549%2C1685987568140:(num 1685987569307) roll requested 2023-06-05 17:53:31,908 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:32,122 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK], DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK]] 2023-06-05 17:53:32,123 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987569307 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987611908 2023-06-05 17:53:32,124 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:32,125 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987569307 is not closed yet, will try archiving it next time 2023-06-05 17:53:41,927 INFO [Listener at localhost.localdomain/44643] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-05 17:53:46,931 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:46,931 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:46,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33549] regionserver.HRegion(9158): Flush requested on 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:53:46,931 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33549%2C1685987568140:(num 1685987611908) roll requested 2023-06-05 17:53:46,932 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4d3d7ae7a4e21b8280bec4e841e2fb3a 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:53:48,933 INFO [Listener at localhost.localdomain/44643] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-05 17:53:51,935 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:51,935 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:51,953 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:51,953 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:51,954 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987611908 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987626932 2023-06-05 17:53:51,954 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32987,DS-b5fce8d7-6950-4ed9-8038-80e4122850a3,DISK], DatanodeInfoWithStorage[127.0.0.1:41157,DS-eca73c4f-cc00-4f04-ab77-d977847d74f6,DISK]] 2023-06-05 17:53:51,954 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140/jenkins-hbase20.apache.org%2C33549%2C1685987568140.1685987611908 is not closed yet, will try archiving it next time 2023-06-05 17:53:51,963 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/80b1ec26027943f988ab06d1c8104431 
2023-06-05 17:53:51,972 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/80b1ec26027943f988ab06d1c8104431 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431 2023-06-05 17:53:51,980 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431, entries=7, sequenceid=31, filesize=12.1 K 2023-06-05 17:53:51,983 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 4d3d7ae7a4e21b8280bec4e841e2fb3a in 5052ms, sequenceid=31, compaction requested=true 2023-06-05 17:53:51,984 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:53:51,984 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-05 17:53:51,984 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:53:51,984 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78 because midkey is the same as first or last row 2023-06-05 17:53:51,986 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): 
Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:53:51,986 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-05 17:53:51,991 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-05 17:53:51,992 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HStore(1912): 4d3d7ae7a4e21b8280bec4e841e2fb3a/info is initiating minor compaction (all files) 2023-06-05 17:53:51,993 INFO [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4d3d7ae7a4e21b8280bec4e841e2fb3a/info in TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 
2023-06-05 17:53:51,993 INFO [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431] into tmpdir=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp, totalSize=36.3 K 2023-06-05 17:53:51,994 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] compactions.Compactor(207): Compacting 929942e0cb34479587d470a0cbb40b78, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685987580599 2023-06-05 17:53:51,995 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] compactions.Compactor(207): Compacting 8ddf8a19885241fdb15125bc6fe52756, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685987594654 2023-06-05 17:53:51,996 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] compactions.Compactor(207): Compacting 80b1ec26027943f988ab06d1c8104431, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685987609500 2023-06-05 17:53:52,025 INFO [RS:0;jenkins-hbase20:33549-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4d3d7ae7a4e21b8280bec4e841e2fb3a#info#compaction#3 average throughput is 10.77 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-05 17:53:52,060 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/e73751737b8642838742d9fe79dea1ac as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/e73751737b8642838742d9fe79dea1ac 2023-06-05 17:53:52,075 INFO [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4d3d7ae7a4e21b8280bec4e841e2fb3a/info of 4d3d7ae7a4e21b8280bec4e841e2fb3a into e73751737b8642838742d9fe79dea1ac(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-05 17:53:52,075 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:53:52,075 INFO [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a., storeName=4d3d7ae7a4e21b8280bec4e841e2fb3a/info, priority=13, startTime=1685987631986; duration=0sec 2023-06-05 17:53:52,076 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-05 17:53:52,076 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:53:52,077 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/e73751737b8642838742d9fe79dea1ac because midkey is the same as first or last row 2023-06-05 17:53:52,077 DEBUG [RS:0;jenkins-hbase20:33549-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:54:04,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33549] regionserver.HRegion(9158): Flush requested on 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:54:04,067 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4d3d7ae7a4e21b8280bec4e841e2fb3a 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:54:04,094 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), 
to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/63df1810313c4dac8c0ced6cd85304ff 2023-06-05 17:54:04,105 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/63df1810313c4dac8c0ced6cd85304ff as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/63df1810313c4dac8c0ced6cd85304ff 2023-06-05 17:54:04,114 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/63df1810313c4dac8c0ced6cd85304ff, entries=7, sequenceid=42, filesize=12.1 K 2023-06-05 17:54:04,116 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 4d3d7ae7a4e21b8280bec4e841e2fb3a in 48ms, sequenceid=42, compaction requested=false 2023-06-05 17:54:04,116 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:54:04,116 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-06-05 17:54:04,116 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:54:04,116 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/e73751737b8642838742d9fe79dea1ac because midkey is the same as first or last row 2023-06-05 17:54:12,080 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-05 17:54:12,081 INFO [Listener at localhost.localdomain/44643] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-05 17:54:12,082 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f199977 to 127.0.0.1:53414 2023-06-05 17:54:12,082 DEBUG [Listener at localhost.localdomain/44643] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:54:12,084 DEBUG [Listener at localhost.localdomain/44643] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-05 17:54:12,084 DEBUG [Listener at localhost.localdomain/44643] util.JVMClusterUtil(257): Found active master hash=98625475, stopped=false 2023-06-05 17:54:12,084 INFO [Listener at localhost.localdomain/44643] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:54:12,087 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-05 17:54:12,088 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-05 17:54:12,088 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:12,088 INFO [Listener at localhost.localdomain/44643] procedure2.ProcedureExecutor(629): Stopping 2023-06-05 17:54:12,089 DEBUG [Listener at localhost.localdomain/44643] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5eba0700 to 127.0.0.1:53414 2023-06-05 17:54:12,089 DEBUG [Listener at localhost.localdomain/44643] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:54:12,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:54:12,090 INFO [Listener at localhost.localdomain/44643] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,33549,1685987568140' ***** 2023-06-05 17:54:12,090 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:54:12,090 INFO [Listener at localhost.localdomain/44643] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-05 17:54:12,090 INFO [RS:0;jenkins-hbase20:33549] regionserver.HeapMemoryManager(220): Stopping 2023-06-05 17:54:12,091 INFO [RS:0;jenkins-hbase20:33549] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-05 17:54:12,091 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-05 17:54:12,091 INFO [RS:0;jenkins-hbase20:33549] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-05 17:54:12,091 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(3303): Received CLOSE for 4d3d7ae7a4e21b8280bec4e841e2fb3a 2023-06-05 17:54:12,092 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(3303): Received CLOSE for 541e119b76ba134a029c42a38b54131d 2023-06-05 17:54:12,092 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:54:12,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4d3d7ae7a4e21b8280bec4e841e2fb3a, disabling compactions & flushes 2023-06-05 17:54:12,092 DEBUG [RS:0;jenkins-hbase20:33549] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39b8bcc9 to 127.0.0.1:53414 2023-06-05 17:54:12,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:54:12,093 DEBUG [RS:0;jenkins-hbase20:33549] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:54:12,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:54:12,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. after waiting 0 ms 2023-06-05 17:54:12,093 INFO [RS:0;jenkins-hbase20:33549] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-05 17:54:12,093 INFO [RS:0;jenkins-hbase20:33549] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-05 17:54:12,093 INFO [RS:0;jenkins-hbase20:33549] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-05 17:54:12,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:54:12,093 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-05 17:54:12,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 4d3d7ae7a4e21b8280bec4e841e2fb3a 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-05 17:54:12,093 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-05 17:54:12,094 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 4d3d7ae7a4e21b8280bec4e841e2fb3a=TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a., 541e119b76ba134a029c42a38b54131d=hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d.} 2023-06-05 17:54:12,095 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:54:12,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:54:12,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:54:12,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:54:12,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:54:12,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column 
families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-05 17:54:12,097 DEBUG [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1504): Waiting on 1588230740, 4d3d7ae7a4e21b8280bec4e841e2fb3a, 541e119b76ba134a029c42a38b54131d 2023-06-05 17:54:12,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/a8fb56bd0c5445c0b3ffef1d64239ad9 2023-06-05 17:54:12,117 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/.tmp/info/4fb48ebf67344ef6b2d814f5bfb236b3 2023-06-05 17:54:12,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/.tmp/info/a8fb56bd0c5445c0b3ffef1d64239ad9 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/a8fb56bd0c5445c0b3ffef1d64239ad9 2023-06-05 17:54:12,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/a8fb56bd0c5445c0b3ffef1d64239ad9, entries=3, sequenceid=48, filesize=7.9 K 2023-06-05 17:54:12,140 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 4d3d7ae7a4e21b8280bec4e841e2fb3a in 47ms, sequenceid=48, compaction requested=true 2023-06-05 17:54:12,144 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431] to archive 2023-06-05 17:54:12,146 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/.tmp/table/850c8d2c461e4750ac1c1e209a638c50 2023-06-05 17:54:12,147 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-05 17:54:12,152 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78 to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/archive/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/929942e0cb34479587d470a0cbb40b78 2023-06-05 17:54:12,155 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756 to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/archive/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/8ddf8a19885241fdb15125bc6fe52756 2023-06-05 17:54:12,155 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/.tmp/info/4fb48ebf67344ef6b2d814f5bfb236b3 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/info/4fb48ebf67344ef6b2d814f5bfb236b3 2023-06-05 17:54:12,157 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431 to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/archive/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/info/80b1ec26027943f988ab06d1c8104431 2023-06-05 17:54:12,164 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/info/4fb48ebf67344ef6b2d814f5bfb236b3, entries=20, sequenceid=14, filesize=7.4 K 2023-06-05 17:54:12,165 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/.tmp/table/850c8d2c461e4750ac1c1e209a638c50 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/table/850c8d2c461e4750ac1c1e209a638c50 2023-06-05 17:54:12,173 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/table/850c8d2c461e4750ac1c1e209a638c50, entries=4, sequenceid=14, filesize=4.8 K 2023-06-05 17:54:12,174 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 78ms, sequenceid=14, compaction requested=false 2023-06-05 17:54:12,179 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-05 17:54:12,179 INFO [regionserver/jenkins-hbase20:0.Chore.1] 
hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-05 17:54:12,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-05 17:54:12,187 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-05 17:54:12,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:54:12,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:54:12,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-05 17:54:12,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/default/TestLogRolling-testSlowSyncLogRolling/4d3d7ae7a4e21b8280bec4e841e2fb3a/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-05 17:54:12,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 2023-06-05 17:54:12,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4d3d7ae7a4e21b8280bec4e841e2fb3a: 2023-06-05 17:54:12,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685987570545.4d3d7ae7a4e21b8280bec4e841e2fb3a. 
2023-06-05 17:54:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 541e119b76ba134a029c42a38b54131d, disabling compactions & flushes 2023-06-05 17:54:12,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:54:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:54:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. after waiting 0 ms 2023-06-05 17:54:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 
2023-06-05 17:54:12,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 541e119b76ba134a029c42a38b54131d 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-05 17:54:12,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/.tmp/info/6bbf12e1235644d8b05519eb788bfe51 2023-06-05 17:54:12,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/.tmp/info/6bbf12e1235644d8b05519eb788bfe51 as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/info/6bbf12e1235644d8b05519eb788bfe51 2023-06-05 17:54:12,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/info/6bbf12e1235644d8b05519eb788bfe51, entries=2, sequenceid=6, filesize=4.8 K 2023-06-05 17:54:12,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 541e119b76ba134a029c42a38b54131d in 33ms, sequenceid=6, compaction requested=false 2023-06-05 17:54:12,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/data/hbase/namespace/541e119b76ba134a029c42a38b54131d/recovered.edits/9.seqid, 
newMaxSeqId=9, maxSeqId=1 2023-06-05 17:54:12,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:54:12,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 541e119b76ba134a029c42a38b54131d: 2023-06-05 17:54:12,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685987569714.541e119b76ba134a029c42a38b54131d. 2023-06-05 17:54:12,297 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33549,1685987568140; all regions closed. 2023-06-05 17:54:12,299 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:54:12,311 DEBUG [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/oldWALs 2023-06-05 17:54:12,311 INFO [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C33549%2C1685987568140.meta:.meta(num 1685987569486) 2023-06-05 17:54:12,312 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/WALs/jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:54:12,323 DEBUG [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/oldWALs 2023-06-05 17:54:12,324 INFO [RS:0;jenkins-hbase20:33549] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C33549%2C1685987568140:(num 1685987626932) 2023-06-05 17:54:12,324 DEBUG [RS:0;jenkins-hbase20:33549] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:54:12,324 INFO 
[RS:0;jenkins-hbase20:33549] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:54:12,324 INFO [RS:0;jenkins-hbase20:33549] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-05 17:54:12,325 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-05 17:54:12,325 INFO [RS:0;jenkins-hbase20:33549] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33549 2023-06-05 17:54:12,331 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:54:12,331 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33549,1685987568140 2023-06-05 17:54:12,331 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:54:12,332 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,33549,1685987568140] 2023-06-05 17:54:12,332 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,33549,1685987568140; numProcessing=1 2023-06-05 17:54:12,333 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,33549,1685987568140 
already deleted, retry=false 2023-06-05 17:54:12,333 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,33549,1685987568140 expired; onlineServers=0 2023-06-05 17:54:12,333 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39011,1685987567025' ***** 2023-06-05 17:54:12,333 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-05 17:54:12,333 DEBUG [M:0;jenkins-hbase20:39011] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44ddedf8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:54:12,333 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:54:12,333 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39011,1685987567025; all regions closed. 2023-06-05 17:54:12,333 DEBUG [M:0;jenkins-hbase20:39011] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:54:12,334 DEBUG [M:0;jenkins-hbase20:39011] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-05 17:54:12,334 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-05 17:54:12,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987568989] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987568989,5,FailOnTimeoutGroup] 2023-06-05 17:54:12,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987568991] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987568991,5,FailOnTimeoutGroup] 2023-06-05 17:54:12,335 DEBUG [M:0;jenkins-hbase20:39011] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-05 17:54:12,336 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-05 17:54:12,336 INFO [M:0;jenkins-hbase20:39011] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-05 17:54:12,336 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:12,336 INFO [M:0;jenkins-hbase20:39011] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-05 17:54:12,336 INFO [M:0;jenkins-hbase20:39011] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-05 17:54:12,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:54:12,336 DEBUG [M:0;jenkins-hbase20:39011] master.HMaster(1512): Stopping service threads 2023-06-05 17:54:12,336 INFO [M:0;jenkins-hbase20:39011] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-05 17:54:12,337 INFO [M:0;jenkins-hbase20:39011] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-05 17:54:12,337 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-05 17:54:12,337 DEBUG [M:0;jenkins-hbase20:39011] zookeeper.ZKUtil(398): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-05 17:54:12,338 WARN [M:0;jenkins-hbase20:39011] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-05 17:54:12,338 INFO [M:0;jenkins-hbase20:39011] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-05 17:54:12,338 INFO [M:0;jenkins-hbase20:39011] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-05 17:54:12,338 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-05 17:54:12,338 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-05 17:54:12,338 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:12,338 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:54:12,338 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:12,339 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-06-05 17:54:12,353 INFO [M:0;jenkins-hbase20:39011] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/61f7d1b3d183433e8aef1d1c98d6377e 2023-06-05 17:54:12,358 INFO [M:0;jenkins-hbase20:39011] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61f7d1b3d183433e8aef1d1c98d6377e 2023-06-05 17:54:12,360 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/61f7d1b3d183433e8aef1d1c98d6377e as hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/61f7d1b3d183433e8aef1d1c98d6377e 2023-06-05 17:54:12,365 INFO [M:0;jenkins-hbase20:39011] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61f7d1b3d183433e8aef1d1c98d6377e 2023-06-05 17:54:12,365 INFO 
[M:0;jenkins-hbase20:39011] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/61f7d1b3d183433e8aef1d1c98d6377e, entries=11, sequenceid=100, filesize=6.1 K 2023-06-05 17:54:12,366 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=100, compaction requested=false 2023-06-05 17:54:12,367 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:12,368 DEBUG [M:0;jenkins-hbase20:39011] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:54:12,368 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/MasterData/WALs/jenkins-hbase20.apache.org,39011,1685987567025 2023-06-05 17:54:12,372 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-05 17:54:12,373 INFO [M:0;jenkins-hbase20:39011] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-06-05 17:54:12,373 INFO [M:0;jenkins-hbase20:39011] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39011
2023-06-05 17:54:12,374 DEBUG [M:0;jenkins-hbase20:39011] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39011,1685987567025 already deleted, retry=false
2023-06-05 17:54:12,432 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:54:12,432 INFO [RS:0;jenkins-hbase20:33549] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33549,1685987568140; zookeeper connection closed.
2023-06-05 17:54:12,432 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): regionserver:33549-0x101bc66babe0001, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:54:12,434 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@fc63fa9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@fc63fa9
2023-06-05 17:54:12,434 INFO [Listener at localhost.localdomain/44643] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-05 17:54:12,532 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:54:12,532 INFO [M:0;jenkins-hbase20:39011] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39011,1685987567025; zookeeper connection closed.
2023-06-05 17:54:12,533 DEBUG [Listener at localhost.localdomain/44643-EventThread] zookeeper.ZKWatcher(600): master:39011-0x101bc66babe0000, quorum=127.0.0.1:53414, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:54:12,537 WARN [Listener at localhost.localdomain/44643] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:54:12,543 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:54:12,654 WARN [BP-1757257502-148.251.75.209-1685987564273 heartbeating to localhost.localdomain/127.0.0.1:41259] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:54:12,654 WARN [BP-1757257502-148.251.75.209-1685987564273 heartbeating to localhost.localdomain/127.0.0.1:41259] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757257502-148.251.75.209-1685987564273 (Datanode Uuid 171a7647-648b-42fa-be2e-49b7e3a0a049) service to localhost.localdomain/127.0.0.1:41259
2023-06-05 17:54:12,657 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/dfs/data/data3/current/BP-1757257502-148.251.75.209-1685987564273] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:12,658 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/dfs/data/data4/current/BP-1757257502-148.251.75.209-1685987564273] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:12,659 WARN [Listener at localhost.localdomain/44643] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:54:12,662 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:54:12,770 WARN [BP-1757257502-148.251.75.209-1685987564273 heartbeating to localhost.localdomain/127.0.0.1:41259] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:54:12,770 WARN [BP-1757257502-148.251.75.209-1685987564273 heartbeating to localhost.localdomain/127.0.0.1:41259] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1757257502-148.251.75.209-1685987564273 (Datanode Uuid 4dcfca39-5b55-45fb-aab3-94c03fb76fcc) service to localhost.localdomain/127.0.0.1:41259
2023-06-05 17:54:12,771 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/dfs/data/data1/current/BP-1757257502-148.251.75.209-1685987564273] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:12,772 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/cluster_d0f05671-cfcb-ff9b-cd2a-fcb1c31ffa1d/dfs/data/data2/current/BP-1757257502-148.251.75.209-1685987564273] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:12,805 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-05 17:54:12,923 INFO [Listener at localhost.localdomain/44643] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-05 17:54:12,956 INFO [Listener at localhost.localdomain/44643]
hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-05 17:54:12,966 INFO [Listener at localhost.localdomain/44643] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10)
Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)
Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:41259 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@4f29ccee java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/44643 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:41259 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41259 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:41259 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:41259 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151)
Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
- Thread LEAK? -, OpenFileDescriptor=439 (was 263) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=106 (was 327), ProcessCount=169 (was 170), AvailableMemoryMB=7137 (was 8264)
2023-06-05 17:54:12,973 INFO [Listener at localhost.localdomain/44643] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=106, ProcessCount=169, AvailableMemoryMB=7137
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/hadoop.log.dir so I do NOT create it in target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4664feb1-edee-7558-5783-efc797792abb/hadoop.tmp.dir so I do NOT create it in target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58, deleteOnExit=true
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/test.cache.data in system properties and HBase conf
2023-06-05 17:54:12,974 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/hadoop.tmp.dir in system properties and HBase conf
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/hadoop.log.dir in system properties and HBase conf
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-05 17:54:12,975 DEBUG [Listener at localhost.localdomain/44643] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:54:12,975 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/nfs.dump.dir in system properties and HBase conf 2023-06-05 17:54:12,976 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir in system properties and HBase conf 2023-06-05 17:54:12,977 INFO [Listener at localhost.localdomain/44643] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:54:12,977 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-05 17:54:12,977 INFO [Listener at localhost.localdomain/44643] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-05 17:54:12,978 WARN [Listener at localhost.localdomain/44643] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:54:12,980 WARN [Listener at localhost.localdomain/44643] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:54:12,980 WARN [Listener at localhost.localdomain/44643] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:54:13,008 WARN [Listener at localhost.localdomain/44643] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:13,010 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:13,015 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_localdomain_41901_hdfs____z2329j/webapp 2023-06-05 17:54:13,088 INFO [Listener at localhost.localdomain/44643] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41901 2023-06-05 17:54:13,089 WARN [Listener at localhost.localdomain/44643] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:54:13,091 WARN [Listener at localhost.localdomain/44643] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:54:13,091 WARN [Listener at localhost.localdomain/44643] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:54:13,119 WARN [Listener at localhost.localdomain/44693] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:13,129 WARN [Listener at localhost.localdomain/44693] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:54:13,132 WARN [Listener at localhost.localdomain/44693] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:13,133 INFO [Listener at localhost.localdomain/44693] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:13,139 INFO [Listener at localhost.localdomain/44693] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_46823_datanode____.bejf1e/webapp 2023-06-05 17:54:13,168 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:54:13,212 INFO [Listener at localhost.localdomain/44693] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46823 2023-06-05 17:54:13,218 WARN [Listener at localhost.localdomain/40855] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:13,229 WARN [Listener at localhost.localdomain/40855] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming 
MILLISECONDS 2023-06-05 17:54:13,232 WARN [Listener at localhost.localdomain/40855] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:13,233 INFO [Listener at localhost.localdomain/40855] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:13,237 INFO [Listener at localhost.localdomain/40855] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_46321_datanode____fp9n87/webapp 2023-06-05 17:54:13,300 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1bd155a8c997e78: Processing first storage report for DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698 from datanode e310359b-f2bb-4840-aadc-4dc77d3a603f 2023-06-05 17:54:13,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1bd155a8c997e78: from storage DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698 node DatanodeRegistration(127.0.0.1:40493, datanodeUuid=e310359b-f2bb-4840-aadc-4dc77d3a603f, infoPort=39967, infoSecurePort=0, ipcPort=40855, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:13,304 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb1bd155a8c997e78: Processing first storage report for DS-d15ca5e8-00c3-4ffc-ad02-c02c5ca59aed from datanode e310359b-f2bb-4840-aadc-4dc77d3a603f 2023-06-05 17:54:13,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb1bd155a8c997e78: from storage DS-d15ca5e8-00c3-4ffc-ad02-c02c5ca59aed node DatanodeRegistration(127.0.0.1:40493, 
datanodeUuid=e310359b-f2bb-4840-aadc-4dc77d3a603f, infoPort=39967, infoSecurePort=0, ipcPort=40855, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:13,336 INFO [Listener at localhost.localdomain/40855] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46321 2023-06-05 17:54:13,346 WARN [Listener at localhost.localdomain/37547] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:13,421 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd99bdc6425c5a83: Processing first storage report for DS-fd894fcc-241e-4914-9baf-f6c26f4e049d from datanode 26e04d54-7a51-4f38-b10a-c6db5ceed5e6 2023-06-05 17:54:13,421 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd99bdc6425c5a83: from storage DS-fd894fcc-241e-4914-9baf-f6c26f4e049d node DatanodeRegistration(127.0.0.1:42883, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=34151, infoSecurePort=0, ipcPort=37547, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:13,421 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd99bdc6425c5a83: Processing first storage report for DS-e7ba68a8-d710-4457-854c-dc02d09aa3ac from datanode 26e04d54-7a51-4f38-b10a-c6db5ceed5e6 2023-06-05 17:54:13,421 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd99bdc6425c5a83: from storage DS-e7ba68a8-d710-4457-854c-dc02d09aa3ac node DatanodeRegistration(127.0.0.1:42883, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=34151, infoSecurePort=0, ipcPort=37547, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, 
hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-05 17:54:13,458 DEBUG [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d 2023-06-05 17:54:13,461 INFO [Listener at localhost.localdomain/37547] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/zookeeper_0, clientPort=53420, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-05 17:54:13,463 INFO [Listener at localhost.localdomain/37547] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53420 2023-06-05 17:54:13,463 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,464 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,483 INFO [Listener at localhost.localdomain/37547] 
util.FSUtils(471): Created version file at hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96 with version=8 2023-06-05 17:54:13,483 INFO [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging 2023-06-05 17:54:13,485 INFO [Listener at localhost.localdomain/37547] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:54:13,485 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,486 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,486 INFO [Listener at localhost.localdomain/37547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:54:13,486 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,486 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:54:13,486 INFO [Listener at localhost.localdomain/37547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, 
hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-05 17:54:13,488 INFO [Listener at localhost.localdomain/37547] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44127 2023-06-05 17:54:13,488 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,489 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,491 INFO [Listener at localhost.localdomain/37547] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44127 connecting to ZooKeeper ensemble=127.0.0.1:53420 2023-06-05 17:54:13,495 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:441270x0, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:54:13,496 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44127-0x101bc680f800000 connected 2023-06-05 17:54:13,514 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:54:13,514 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:54:13,515 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:54:13,517 DEBUG [Listener at localhost.localdomain/37547] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44127 2023-06-05 17:54:13,518 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44127 2023-06-05 17:54:13,518 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44127 2023-06-05 17:54:13,522 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44127 2023-06-05 17:54:13,523 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44127 2023-06-05 17:54:13,523 INFO [Listener at localhost.localdomain/37547] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96, hbase.cluster.distributed=false 2023-06-05 17:54:13,534 INFO [Listener at localhost.localdomain/37547] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:54:13,535 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,535 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,535 INFO [Listener at localhost.localdomain/37547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:54:13,535 INFO [Listener at 
localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:54:13,535 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:54:13,535 INFO [Listener at localhost.localdomain/37547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-05 17:54:13,537 INFO [Listener at localhost.localdomain/37547] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36201 2023-06-05 17:54:13,537 INFO [Listener at localhost.localdomain/37547] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-05 17:54:13,538 DEBUG [Listener at localhost.localdomain/37547] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-05 17:54:13,539 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,540 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,541 INFO [Listener at localhost.localdomain/37547] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36201 connecting to ZooKeeper ensemble=127.0.0.1:53420 2023-06-05 17:54:13,544 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:362010x0, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, 
state=SyncConnected, path=null 2023-06-05 17:54:13,545 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36201-0x101bc680f800001 connected 2023-06-05 17:54:13,545 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:54:13,545 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:54:13,546 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:54:13,547 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36201 2023-06-05 17:54:13,547 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36201 2023-06-05 17:54:13,548 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36201 2023-06-05 17:54:13,548 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36201 2023-06-05 17:54:13,548 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36201 2023-06-05 17:54:13,549 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 
17:54:13,564 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:54:13,565 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,568 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:54:13,568 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:54:13,568 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:54:13,570 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:54:13,570 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44127,1685987653485 from backup master directory 
2023-06-05 17:54:13,571 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,571 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:54:13,571 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:54:13,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,592 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/hbase.id with ID: 93fa0aee-50db-40f6-8d97-80ebe69dc6e6 2023-06-05 17:54:13,608 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:54:13,611 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,625 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4dbfa9d1 to 127.0.0.1:53420 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-06-05 17:54:13,629 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37497f7c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:54:13,629 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:54:13,630 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-05 17:54:13,631 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:54:13,632 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store-tmp 2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-05 17:54:13,642 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:54:13,642 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-05 17:54:13,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:54:13,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44127%2C1685987653485, suffix=, logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485, archiveDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/oldWALs, maxLogs=10 2023-06-05 17:54:13,654 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987653646 2023-06-05 17:54:13,654 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] 2023-06-05 17:54:13,654 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:54:13,654 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:54:13,654 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,655 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,657 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,659 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-05 17:54:13,660 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-05 17:54:13,660 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:54:13,662 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,664 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:54:13,672 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:54:13,673 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=839383, jitterRate=0.0673314779996872}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:54:13,673 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:54:13,673 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-05 17:54:13,675 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-05 17:54:13,676 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-05 17:54:13,676 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-05 17:54:13,677 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-05 17:54:13,678 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-05 17:54:13,678 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-05 17:54:13,680 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-05 17:54:13,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-05 17:54:13,695 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:54:13,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:54:13,696 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:54:13,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:54:13,697 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:54:13,699 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:54:13,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:54:13,701 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:54:13,702 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:54:13,702 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:54:13,702 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44127,1685987653485, sessionid=0x101bc680f800000, setting cluster-up flag (Was=false) 2023-06-05 17:54:13,706 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,709 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:54:13,710 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,712 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:54:13,715 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:54:13,716 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:54:13,717 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.hbase-snapshot/.tmp 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:54:13,720 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:54:13,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:54:13,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987683722 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:54:13,723 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-05 17:54:13,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:54:13,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:54:13,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:54:13,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:54:13,724 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:54:13,725 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:54:13,725 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987653725,5,FailOnTimeoutGroup] 2023-06-05 17:54:13,725 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987653725,5,FailOnTimeoutGroup] 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-05 17:54:13,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-05 17:54:13,726 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:54:13,739 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:54:13,740 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:54:13,740 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96 2023-06-05 17:54:13,751 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:54:13,751 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(951): ClusterId : 93fa0aee-50db-40f6-8d97-80ebe69dc6e6 2023-06-05 17:54:13,752 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:54:13,753 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:54:13,754 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:54:13,754 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 
2023-06-05 17:54:13,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/info 2023-06-05 17:54:13,756 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:54:13,756 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:54:13,758 DEBUG [RS:0;jenkins-hbase20:36201] zookeeper.ReadOnlyZKClient(139): Connect 0x79134eab to 127.0.0.1:53420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:54:13,758 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:54:13,758 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:54:13,761 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): 
Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:54:13,762 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:54:13,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:54:13,762 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:54:13,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/table 2023-06-05 17:54:13,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:54:13,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:54:13,767 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740 2023-06-05 17:54:13,768 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740 2023-06-05 17:54:13,769 DEBUG [RS:0;jenkins-hbase20:36201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77d4aff9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:54:13,769 DEBUG [RS:0;jenkins-hbase20:36201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73eb25db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:54:13,771 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No 
hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-05 17:54:13,772 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:54:13,774 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:54:13,775 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=764177, jitterRate=-0.02829967439174652}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:54:13,775 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:54:13,775 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:54:13,775 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:54:13,775 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:54:13,775 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:54:13,775 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:54:13,778 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:54:13,779 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:54:13,780 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 
17:54:13,780 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-05 17:54:13,780 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-05 17:54:13,782 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-05 17:54:13,782 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36201 2023-06-05 17:54:13,782 INFO [RS:0;jenkins-hbase20:36201] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:54:13,783 INFO [RS:0;jenkins-hbase20:36201] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:54:13,783 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-05 17:54:13,784 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44127,1685987653485 with isa=jenkins-hbase20.apache.org/148.251.75.209:36201, startcode=1685987653534
2023-06-05 17:54:13,784 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-06-05 17:54:13,784 DEBUG [RS:0;jenkins-hbase20:36201] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-06-05 17:54:13,788 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33193, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2023-06-05 17:54:13,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,789 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96
2023-06-05 17:54:13,790 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44693
2023-06-05 17:54:13,790 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-06-05 17:54:13,791 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:54:13,792 DEBUG [RS:0;jenkins-hbase20:36201] zookeeper.ZKUtil(162): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,792 WARN [RS:0;jenkins-hbase20:36201] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-05 17:54:13,792 INFO [RS:0;jenkins-hbase20:36201] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:54:13,792 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,792 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36201,1685987653534]
2023-06-05 17:54:13,797 DEBUG [RS:0;jenkins-hbase20:36201] zookeeper.ZKUtil(162): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,798 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-05 17:54:13,798 INFO [RS:0;jenkins-hbase20:36201] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-05 17:54:13,800 INFO [RS:0;jenkins-hbase20:36201] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-05 17:54:13,802 INFO [RS:0;jenkins-hbase20:36201] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-05 17:54:13,802 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,805 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-05 17:54:13,807 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,807 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,808 DEBUG [RS:0;jenkins-hbase20:36201] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:13,808 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,809 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,809 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,818 INFO [RS:0;jenkins-hbase20:36201] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-05 17:54:13,819 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36201,1685987653534-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:13,830 INFO [RS:0;jenkins-hbase20:36201] regionserver.Replication(203): jenkins-hbase20.apache.org,36201,1685987653534 started
2023-06-05 17:54:13,830 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36201,1685987653534, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36201, sessionid=0x101bc680f800001
2023-06-05 17:54:13,830 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-05 17:54:13,830 DEBUG [RS:0;jenkins-hbase20:36201] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,830 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36201,1685987653534'
2023-06-05 17:54:13,830 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36201,1685987653534'
2023-06-05 17:54:13,831 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-05 17:54:13,832 DEBUG [RS:0;jenkins-hbase20:36201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-05 17:54:13,832 DEBUG [RS:0;jenkins-hbase20:36201] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-05 17:54:13,832 INFO [RS:0;jenkins-hbase20:36201] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-05 17:54:13,832 INFO [RS:0;jenkins-hbase20:36201] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-05 17:54:13,934 DEBUG [jenkins-hbase20:44127] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-05 17:54:13,935 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36201,1685987653534, state=OPENING
2023-06-05 17:54:13,936 INFO [RS:0;jenkins-hbase20:36201] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36201%2C1685987653534, suffix=, logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534, archiveDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs, maxLogs=32
2023-06-05 17:54:13,936 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-05 17:54:13,937 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:54:13,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36201,1685987653534}]
2023-06-05 17:54:13,937 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:54:13,952 INFO [RS:0;jenkins-hbase20:36201] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987653939
2023-06-05 17:54:13,952 DEBUG [RS:0;jenkins-hbase20:36201] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]]
2023-06-05 17:54:14,095 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:14,095 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-05 17:54:14,100 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:50222, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-05 17:54:14,108 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-05 17:54:14,108 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:54:14,111 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534, archiveDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs, maxLogs=32
2023-06-05 17:54:14,125 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987654114.meta
2023-06-05 17:54:14,125 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK], DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]]
2023-06-05 17:54:14,126 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:54:14,126 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-05 17:54:14,126 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-05 17:54:14,127 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-05 17:54:14,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-05 17:54:14,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:54:14,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-05 17:54:14,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-05 17:54:14,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-05 17:54:14,132 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/info
2023-06-05 17:54:14,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/info
2023-06-05 17:54:14,133 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-05 17:54:14,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:54:14,135 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-05 17:54:14,136 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:54:14,136 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:54:14,137 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-05 17:54:14,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:54:14,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-05 17:54:14,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/table
2023-06-05 17:54:14,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740/table
2023-06-05 17:54:14,141 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-05 17:54:14,142 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:54:14,144 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740
2023-06-05 17:54:14,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/meta/1588230740
2023-06-05 17:54:14,149 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-05 17:54:14,151 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-05 17:54:14,152 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852850, jitterRate=0.08445559442043304}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-05 17:54:14,152 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-05 17:54:14,154 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987654095
2023-06-05 17:54:14,158 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-06-05 17:54:14,158 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-06-05 17:54:14,159 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36201,1685987653534, state=OPEN
2023-06-05 17:54:14,161 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-06-05 17:54:14,161 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:54:14,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-06-05 17:54:14,164 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36201,1685987653534 in 224 msec
2023-06-05 17:54:14,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-06-05 17:54:14,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 384 msec
2023-06-05 17:54:14,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 450 msec
2023-06-05 17:54:14,170 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987654170, completionTime=-1
2023-06-05 17:54:14,170 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-06-05 17:54:14,170 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-06-05 17:54:14,173 DEBUG [hconnection-0x24311cb7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:54:14,175 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:50226, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:54:14,176 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-06-05 17:54:14,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987714177
2023-06-05 17:54:14,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685987774177
2023-06-05 17:54:14,177 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44127,1685987653485-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44127,1685987653485-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44127,1685987653485-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44127, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-05 17:54:14,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-05 17:54:14,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-05 17:54:14,185 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-05 17:54:14,186 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-05 17:54:14,187 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-05 17:54:14,189 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,190 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4 empty.
2023-06-05 17:54:14,190 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,190 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-05 17:54:14,203 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-05 17:54:14,204 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ea5dea98f3db6033eda5fa365120d0e4, NAME => 'hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp
2023-06-05 17:54:14,216 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:54:14,217 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ea5dea98f3db6033eda5fa365120d0e4, disabling compactions & flushes
2023-06-05 17:54:14,217 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,217 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,217 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. after waiting 0 ms
2023-06-05 17:54:14,217 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,217 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,217 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ea5dea98f3db6033eda5fa365120d0e4:
2023-06-05 17:54:14,221 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-05 17:54:14,223 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987654223"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987654223"}]},"ts":"1685987654223"}
2023-06-05 17:54:14,226 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-05 17:54:14,227 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-05 17:54:14,228 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987654227"}]},"ts":"1685987654227"}
2023-06-05 17:54:14,230 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-05 17:54:14,234 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ea5dea98f3db6033eda5fa365120d0e4, ASSIGN}]
2023-06-05 17:54:14,237 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ea5dea98f3db6033eda5fa365120d0e4, ASSIGN
2023-06-05 17:54:14,239 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ea5dea98f3db6033eda5fa365120d0e4, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36201,1685987653534; forceNewPlan=false, retain=false
2023-06-05 17:54:14,390 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ea5dea98f3db6033eda5fa365120d0e4, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:14,390 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987654390"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987654390"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987654390"}]},"ts":"1685987654390"}
2023-06-05 17:54:14,393 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure ea5dea98f3db6033eda5fa365120d0e4, server=jenkins-hbase20.apache.org,36201,1685987653534}]
2023-06-05 17:54:14,557 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,557 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ea5dea98f3db6033eda5fa365120d0e4, NAME => 'hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:54:14,557 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,558 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:54:14,558 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,558 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,560 INFO [StoreOpener-ea5dea98f3db6033eda5fa365120d0e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,563 DEBUG [StoreOpener-ea5dea98f3db6033eda5fa365120d0e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4/info
2023-06-05 17:54:14,563 DEBUG [StoreOpener-ea5dea98f3db6033eda5fa365120d0e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4/info
2023-06-05 17:54:14,564 INFO [StoreOpener-ea5dea98f3db6033eda5fa365120d0e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea5dea98f3db6033eda5fa365120d0e4 columnFamilyName info
2023-06-05 17:54:14,565 INFO [StoreOpener-ea5dea98f3db6033eda5fa365120d0e4-1] regionserver.HStore(310): Store=ea5dea98f3db6033eda5fa365120d0e4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:54:14,567 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,568 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,572 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:54:14,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/hbase/namespace/ea5dea98f3db6033eda5fa365120d0e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-05 17:54:14,575 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ea5dea98f3db6033eda5fa365120d0e4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=812790, jitterRate=0.03351619839668274}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-05 17:54:14,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ea5dea98f3db6033eda5fa365120d0e4:
2023-06-05 17:54:14,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4., pid=6, masterSystemTime=1685987654548
2023-06-05 17:54:14,579 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,579 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:54:14,580 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ea5dea98f3db6033eda5fa365120d0e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:14,580 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987654580"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987654580"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987654580"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987654580"}]},"ts":"1685987654580"}
2023-06-05 17:54:14,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-05 17:54:14,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure ea5dea98f3db6033eda5fa365120d0e4, server=jenkins-hbase20.apache.org,36201,1685987653534 in 189 msec
2023-06-05 17:54:14,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-05 17:54:14,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ea5dea98f3db6033eda5fa365120d0e4, ASSIGN in 351 msec
2023-06-05 17:54:14,588 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-05 17:54:14,589 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987654588"}]},"ts":"1685987654588"}
2023-06-05 17:54:14,590 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-05 17:54:14,593 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-05 17:54:14,595 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 410 msec
2023-06-05 17:54:14,685 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-05 17:54:14,686 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:54:14,686 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:54:14,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-05 17:54:14,705 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:54:14,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 msec
2023-06-05 17:54:14,716 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-05 17:54:14,728 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:54:14,735 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec
2023-06-05 17:54:14,744 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-05 17:54:14,746 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.174sec
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44127,1685987653485-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-05 17:54:14,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44127,1685987653485-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-05 17:54:14,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-05 17:54:14,752 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ReadOnlyZKClient(139): Connect 0x480ad75b to 127.0.0.1:53420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-05 17:54:14,761 DEBUG [Listener at localhost.localdomain/37547] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@86ac88a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-05 17:54:14,764 DEBUG [hconnection-0x3e39930f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:54:14,768 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:50232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:54:14,770 INFO [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44127,1685987653485
2023-06-05 17:54:14,771 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:54:14,774 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-05 17:54:14,774 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:54:14,775 INFO [Listener at localhost.localdomain/37547] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-05 17:54:14,787 INFO [Listener at localhost.localdomain/37547] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:54:14,787 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:54:14,787 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:54:14,787 INFO [Listener at localhost.localdomain/37547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:54:14,788 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:54:14,788 INFO [Listener at localhost.localdomain/37547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:54:14,788 INFO [Listener at localhost.localdomain/37547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:54:14,789 INFO [Listener at localhost.localdomain/37547] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38597
2023-06-05 17:54:14,790 INFO [Listener at localhost.localdomain/37547] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-05 17:54:14,791 DEBUG [Listener at localhost.localdomain/37547] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-05 17:54:14,791 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:54:14,792 INFO [Listener at localhost.localdomain/37547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:54:14,793 INFO [Listener at localhost.localdomain/37547] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38597 connecting to ZooKeeper ensemble=127.0.0.1:53420
2023-06-05 17:54:14,795 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:385970x0, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:54:14,796 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(162): regionserver:385970x0, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-05 17:54:14,797 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38597-0x101bc680f800005 connected
2023-06-05 17:54:14,798 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(162): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/running
2023-06-05 17:54:14,799 DEBUG [Listener at localhost.localdomain/37547] zookeeper.ZKUtil(164): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:54:14,799 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38597
2023-06-05 17:54:14,800 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38597
2023-06-05 17:54:14,800 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38597
2023-06-05 17:54:14,801 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38597
2023-06-05 17:54:14,801 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38597
2023-06-05 17:54:14,803 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(951): ClusterId : 93fa0aee-50db-40f6-8d97-80ebe69dc6e6
2023-06-05 17:54:14,804 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-06-05 17:54:14,810 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-06-05 17:54:14,810 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-06-05 17:54:14,815 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-06-05 17:54:14,816 DEBUG [RS:1;jenkins-hbase20:38597] zookeeper.ReadOnlyZKClient(139): Connect 0x18fea8e3 to 127.0.0.1:53420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-05 17:54:14,823 DEBUG [RS:1;jenkins-hbase20:38597] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1639558c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-05 17:54:14,824 DEBUG [RS:1;jenkins-hbase20:38597] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1206c393, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-06-05 17:54:14,830 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:38597
2023-06-05 17:54:14,830 INFO [RS:1;jenkins-hbase20:38597] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-06-05 17:54:14,831 INFO [RS:1;jenkins-hbase20:38597] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-06-05 17:54:14,831 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1022): About to register with Master.
2023-06-05 17:54:14,831 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44127,1685987653485 with isa=jenkins-hbase20.apache.org/148.251.75.209:38597, startcode=1685987654787
2023-06-05 17:54:14,831 DEBUG [RS:1;jenkins-hbase20:38597] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-06-05 17:54:14,834 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51015, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService
2023-06-05 17:54:14,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,835 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96
2023-06-05 17:54:14,835 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44693
2023-06-05 17:54:14,835 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-06-05 17:54:14,836 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:54:14,836 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:54:14,836 DEBUG [RS:1;jenkins-hbase20:38597] zookeeper.ZKUtil(162): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,836 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38597,1685987654787]
2023-06-05 17:54:14,836 WARN [RS:1;jenkins-hbase20:38597] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-05 17:54:14,837 INFO [RS:1;jenkins-hbase20:38597] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:54:14,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,837 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:14,841 DEBUG [RS:1;jenkins-hbase20:38597] zookeeper.ZKUtil(162): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,842 DEBUG [RS:1;jenkins-hbase20:38597] zookeeper.ZKUtil(162): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:54:14,843 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-06-05 17:54:14,844 INFO [RS:1;jenkins-hbase20:38597] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-06-05 17:54:14,847 INFO [RS:1;jenkins-hbase20:38597] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-06-05 17:54:14,848 INFO [RS:1;jenkins-hbase20:38597] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-05 17:54:14,849 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,849 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-05 17:54:14,850 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,850 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,851 DEBUG [RS:1;jenkins-hbase20:38597] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:54:14,854 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,854 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,854 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,867 INFO [RS:1;jenkins-hbase20:38597] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-05 17:54:14,868 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38597,1685987654787-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:54:14,879 INFO [RS:1;jenkins-hbase20:38597] regionserver.Replication(203): jenkins-hbase20.apache.org,38597,1685987654787 started
2023-06-05 17:54:14,879 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38597,1685987654787, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38597, sessionid=0x101bc680f800005
2023-06-05 17:54:14,879 INFO [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:38597,5,FailOnTimeoutGroup]
2023-06-05 17:54:14,879 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-05 17:54:14,879 INFO [Listener at localhost.localdomain/37547] wal.TestLogRolling(323): Replication=2
2023-06-05 17:54:14,879 DEBUG [RS:1;jenkins-hbase20:38597] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,880 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38597,1685987654787'
2023-06-05 17:54:14,881 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:54:14,881 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:54:14,883 DEBUG [Listener at localhost.localdomain/37547] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-06-05 17:54:14,883 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-05 17:54:14,883 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-05 17:54:14,883 DEBUG [RS:1;jenkins-hbase20:38597] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38597,1685987654787
2023-06-05 17:54:14,883 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38597,1685987654787'
2023-06-05 17:54:14,883 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-05 17:54:14,884 DEBUG [RS:1;jenkins-hbase20:38597] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-05 17:54:14,884 DEBUG [RS:1;jenkins-hbase20:38597] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-05 17:54:14,884 INFO [RS:1;jenkins-hbase20:38597] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-05 17:54:14,885 INFO [RS:1;jenkins-hbase20:38597] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-05 17:54:14,886 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52200, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-06-05 17:54:14,887 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-06-05 17:54:14,888 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-06-05 17:54:14,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-05 17:54:14,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath
2023-06-05 17:54:14,892 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION
2023-06-05 17:54:14,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9
2023-06-05 17:54:14,893 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:54:14,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:54:14,896 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:14,896 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10 empty. 2023-06-05 17:54:14,897 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:14,897 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-05 17:54:14,914 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-05 17:54:14,916 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5ef29a7cc137b39b01715319e4e12c10, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/.tmp 2023-06-05 17:54:14,926 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:54:14,926 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 5ef29a7cc137b39b01715319e4e12c10, disabling compactions & flushes 2023-06-05 17:54:14,926 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:14,926 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:14,926 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
after waiting 0 ms 2023-06-05 17:54:14,926 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:14,926 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:14,927 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 5ef29a7cc137b39b01715319e4e12c10: 2023-06-05 17:54:14,930 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:54:14,931 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685987654931"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987654931"}]},"ts":"1685987654931"} 2023-06-05 17:54:14,933 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-05 17:54:14,935 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:54:14,935 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987654935"}]},"ts":"1685987654935"} 2023-06-05 17:54:14,936 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-05 17:54:14,944 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-06-05 17:54:14,946 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-05 17:54:14,946 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-05 17:54:14,946 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-05 17:54:14,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5ef29a7cc137b39b01715319e4e12c10, ASSIGN}] 2023-06-05 17:54:14,949 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5ef29a7cc137b39b01715319e4e12c10, ASSIGN 2023-06-05 17:54:14,950 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5ef29a7cc137b39b01715319e4e12c10, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38597,1685987654787; forceNewPlan=false, retain=false 2023-06-05 17:54:14,989 INFO [RS:1;jenkins-hbase20:38597] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38597%2C1685987654787, suffix=, logDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787, archiveDir=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs, maxLogs=32 2023-06-05 17:54:15,010 INFO [RS:1;jenkins-hbase20:38597] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 2023-06-05 17:54:15,010 DEBUG [RS:1;jenkins-hbase20:38597] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] 2023-06-05 17:54:15,107 INFO [jenkins-hbase20:44127] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-06-05 17:54:15,110 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5ef29a7cc137b39b01715319e4e12c10, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:54:15,110 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685987655109"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987655109"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987655109"}]},"ts":"1685987655109"} 2023-06-05 17:54:15,114 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5ef29a7cc137b39b01715319e4e12c10, server=jenkins-hbase20.apache.org,38597,1685987654787}] 2023-06-05 17:54:15,268 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:54:15,268 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-05 17:54:15,270 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-05 17:54:15,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
2023-06-05 17:54:15,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ef29a7cc137b39b01715319e4e12c10, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:54:15,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:54:15,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,277 INFO [StoreOpener-5ef29a7cc137b39b01715319e4e12c10-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,279 DEBUG [StoreOpener-5ef29a7cc137b39b01715319e4e12c10-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info 2023-06-05 17:54:15,279 DEBUG [StoreOpener-5ef29a7cc137b39b01715319e4e12c10-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info 2023-06-05 17:54:15,279 INFO [StoreOpener-5ef29a7cc137b39b01715319e4e12c10-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ef29a7cc137b39b01715319e4e12c10 columnFamilyName info 2023-06-05 17:54:15,280 INFO [StoreOpener-5ef29a7cc137b39b01715319e4e12c10-1] regionserver.HStore(310): Store=5ef29a7cc137b39b01715319e4e12c10/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:54:15,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10 2023-06-05 
17:54:15,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:15,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:54:15,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5ef29a7cc137b39b01715319e4e12c10; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=882929, jitterRate=0.12270325422286987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:54:15,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5ef29a7cc137b39b01715319e4e12c10: 2023-06-05 17:54:15,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10., pid=11, masterSystemTime=1685987655268 2023-06-05 17:54:15,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:15,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
2023-06-05 17:54:15,300 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5ef29a7cc137b39b01715319e4e12c10, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:54:15,300 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685987655299"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987655299"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987655299"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987655299"}]},"ts":"1685987655299"} 2023-06-05 17:54:15,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-05 17:54:15,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5ef29a7cc137b39b01715319e4e12c10, server=jenkins-hbase20.apache.org,38597,1685987654787 in 189 msec 2023-06-05 17:54:15,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-05 17:54:15,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=5ef29a7cc137b39b01715319e4e12c10, ASSIGN in 359 msec 2023-06-05 17:54:15,311 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:54:15,311 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987655311"}]},"ts":"1685987655311"} 2023-06-05 17:54:15,313 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-05 17:54:15,316 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:54:15,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 428 msec 2023-06-05 17:54:17,723 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-05 17:54:19,798 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-05 17:54:19,799 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-05 17:54:20,844 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-05 17:54:24,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:54:24,897 INFO [Listener at localhost.localdomain/37547] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-05 17:54:24,903 DEBUG [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 
2023-06-05 17:54:24,903 DEBUG [Listener at localhost.localdomain/37547] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:54:24,919 WARN [Listener at localhost.localdomain/37547] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:54:24,921 WARN [Listener at localhost.localdomain/37547] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:24,922 INFO [Listener at localhost.localdomain/37547] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:24,929 INFO [Listener at localhost.localdomain/37547] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_36815_datanode____.z4mydc/webapp 2023-06-05 17:54:25,011 INFO [Listener at localhost.localdomain/37547] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36815 2023-06-05 17:54:25,025 WARN [Listener at localhost.localdomain/35433] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:25,046 WARN [Listener at localhost.localdomain/35433] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:54:25,049 WARN [Listener at localhost.localdomain/35433] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:25,050 INFO [Listener at localhost.localdomain/35433] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:25,057 INFO [Listener at localhost.localdomain/35433] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_45961_datanode____.1rn1mu/webapp 2023-06-05 17:54:25,156 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2ba707a4f27c57: Processing first storage report for DS-0c111fa7-8751-49bb-8765-c45cfe06bbff from datanode 44e058c4-c926-4ba2-9732-bdcf541a87d2 2023-06-05 17:54:25,156 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2ba707a4f27c57: from storage DS-0c111fa7-8751-49bb-8765-c45cfe06bbff node DatanodeRegistration(127.0.0.1:45297, datanodeUuid=44e058c4-c926-4ba2-9732-bdcf541a87d2, infoPort=45417, infoSecurePort=0, ipcPort=35433, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,156 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2ba707a4f27c57: Processing first storage report for DS-1b8d88bc-131a-485f-8e65-1c98a7d3a1ca from datanode 44e058c4-c926-4ba2-9732-bdcf541a87d2 2023-06-05 17:54:25,156 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2ba707a4f27c57: from storage DS-1b8d88bc-131a-485f-8e65-1c98a7d3a1ca node DatanodeRegistration(127.0.0.1:45297, datanodeUuid=44e058c4-c926-4ba2-9732-bdcf541a87d2, infoPort=45417, infoSecurePort=0, ipcPort=35433, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,160 INFO [Listener at localhost.localdomain/35433] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45961 2023-06-05 17:54:25,171 WARN [Listener at localhost.localdomain/44299] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:25,190 WARN [Listener at localhost.localdomain/44299] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:54:25,193 WARN [Listener at localhost.localdomain/44299] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:54:25,194 INFO [Listener at localhost.localdomain/44299] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:54:25,208 INFO [Listener at localhost.localdomain/44299] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_44199_datanode____rfudbs/webapp 2023-06-05 17:54:25,283 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x780d4a98be77860f: Processing first storage report for DS-626d1d6e-0ca1-4238-a441-4dba74ad675a from datanode 5eac5c40-9ec0-417d-8823-6696584fe79b 2023-06-05 17:54:25,283 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x780d4a98be77860f: from storage DS-626d1d6e-0ca1-4238-a441-4dba74ad675a node DatanodeRegistration(127.0.0.1:34911, datanodeUuid=5eac5c40-9ec0-417d-8823-6696584fe79b, infoPort=38745, infoSecurePort=0, ipcPort=44299, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,283 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x780d4a98be77860f: 
Processing first storage report for DS-2467c763-f919-4f31-ab96-2521cbc5cafe from datanode 5eac5c40-9ec0-417d-8823-6696584fe79b 2023-06-05 17:54:25,283 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x780d4a98be77860f: from storage DS-2467c763-f919-4f31-ab96-2521cbc5cafe node DatanodeRegistration(127.0.0.1:34911, datanodeUuid=5eac5c40-9ec0-417d-8823-6696584fe79b, infoPort=38745, infoSecurePort=0, ipcPort=44299, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,312 INFO [Listener at localhost.localdomain/44299] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44199 2023-06-05 17:54:25,376 WARN [Listener at localhost.localdomain/43289] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:54:25,469 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78bd398a83631071: Processing first storage report for DS-60a1262f-b0eb-47db-be64-b71b104f8cef from datanode 0438e41a-3fb0-43af-81e9-0719de499968 2023-06-05 17:54:25,469 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x78bd398a83631071: from storage DS-60a1262f-b0eb-47db-be64-b71b104f8cef node DatanodeRegistration(127.0.0.1:39575, datanodeUuid=0438e41a-3fb0-43af-81e9-0719de499968, infoPort=40935, infoSecurePort=0, ipcPort=43289, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,469 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78bd398a83631071: Processing first storage report for DS-55c269a4-11bb-4053-b861-f511280f4a01 from datanode 0438e41a-3fb0-43af-81e9-0719de499968 2023-06-05 17:54:25,469 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x78bd398a83631071: from storage DS-55c269a4-11bb-4053-b861-f511280f4a01 node DatanodeRegistration(127.0.0.1:39575, datanodeUuid=0438e41a-3fb0-43af-81e9-0719de499968, infoPort=40935, infoSecurePort=0, ipcPort=43289, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:54:25,488 WARN [Listener at localhost.localdomain/43289] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:54:25,490 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:54:25,492 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:54:25,491 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008 
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:54:25,493 WARN [DataStreamer for file /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]) is bad.
2023-06-05 17:54:25,493 WARN [DataStreamer for file /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987653646 block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]) is bad.
2023-06-05 17:54:25,493 WARN [DataStreamer for file /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987653939 block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK], DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]) is bad.
2023-06-05 17:54:25,496 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009
java.io.IOException: Bad response ERROR for BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-06-05 17:54:25,496 WARN [DataStreamer for file /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987654114.meta block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK], DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]) is bad.
2023-06-05 17:54:25,496 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42883]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,505 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:40064 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40064 dst: /127.0.0.1:40493
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,512 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1499708064_17 at /127.0.0.1:40014 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40014 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40493 remote=/127.0.0.1:40014]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,513 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40493]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,513 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:54:25,514 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:40104 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40104 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40493 remote=/127.0.0.1:40104]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,517 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40493]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,518 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1499708064_17 at /127.0.0.1:42134 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:42883:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42134 dst: /127.0.0.1:42883
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,520 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42212 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:42883:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42212 dst: /127.0.0.1:42883
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,520 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:40050 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40050 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40493 remote=/127.0.0.1:40050]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,521 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40493]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,522 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:42144 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:42883:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42144 dst: /127.0.0.1:42883
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,620 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:42160 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:42883:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42160 dst: /127.0.0.1:42883
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,621 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:54:25,622 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid 26e04d54-7a51-4f38-b10a-c6db5ceed5e6) service to localhost.localdomain/127.0.0.1:44693
2023-06-05 17:54:25,622 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data3/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:25,623 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data4/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:25,625 WARN [Listener at localhost.localdomain/43289] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:54:25,625 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:54:25,626 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1017
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:54:25,626 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:54:25,625 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1018
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:54:25,629 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:54:25,738 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:50790 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50790 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,739 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:54:25,738 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1499708064_17 at /127.0.0.1:50764 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50764 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,738 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:50792 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50792 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,738 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:50778 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40493:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50778 dst: /127.0.0.1:40493
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:25,739 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid e310359b-f2bb-4840-aadc-4dc77d3a603f) service to localhost.localdomain/127.0.0.1:44693
2023-06-05 17:54:25,742 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data1/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:25,743 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data2/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:54:25,748 DEBUG [Listener at localhost.localdomain/43289] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:54:25,751 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39814, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:54:25,753 WARN [RS:1;jenkins-hbase20:38597.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:25,754 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38597%2C1685987654787:(num 1685987654991) roll requested
2023-06-05 17:54:25,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38597] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:54:25,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38597] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:39814 deadline: 1685987675752, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-05 17:54:25,762 WARN [Thread-629] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741839_1019 2023-06-05 17:54:25,766 WARN [Thread-629] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK] 2023-06-05 17:54:25,778 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-05 17:54:25,778 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 2023-06-05 17:54:25,778 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]] 2023-06-05 17:54:25,778 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:54:25,778 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 is not closed yet, will try archiving it next time 2023-06-05 17:54:25,779 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:54:25,780 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987654991 2023-06-05 17:54:37,812 INFO [Listener at localhost.localdomain/43289] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 2023-06-05 17:54:37,813 WARN [Listener at localhost.localdomain/43289] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:54:37,815 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:54:37,815 WARN [DataStreamer for file 
/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 block BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK], DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK]) is bad. 2023-06-05 17:54:37,819 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:54:37,822 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:50968 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:45297:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50968 dst: /127.0.0.1:45297 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45297 remote=/127.0.0.1:50968]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:37,822 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45297]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:37,823 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:34544 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:39575:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34544 dst: /127.0.0.1:39575 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:37,926 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:54:37,926 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid 0438e41a-3fb0-43af-81e9-0719de499968) service to localhost.localdomain/127.0.0.1:44693 2023-06-05 17:54:37,928 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data9/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:54:37,928 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data10/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:54:37,935 WARN [sync.3] wal.FSHLog(747): HDFS 
pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]] 2023-06-05 17:54:37,935 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]] 2023-06-05 17:54:37,935 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38597%2C1685987654787:(num 1685987665754) roll requested 2023-06-05 17:54:37,947 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 2023-06-05 17:54:37,947 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]] 2023-06-05 17:54:37,947 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 is not closed yet, will try archiving it next time 2023-06-05 17:54:40,173 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@c207c3d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45297, 
datanodeUuid=44e058c4-c926-4ba2-9732-bdcf541a87d2, infoPort=45417, infoSecurePort=0, ipcPort=35433, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741840_1021 to 127.0.0.1:40493 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:41,941 WARN [Listener at localhost.localdomain/43289] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:54:41,945 WARN [ResponseProcessor for block BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022 java.io.IOException: Bad response ERROR for BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022 from datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-05 17:54:41,945 WARN [DataStreamer for file /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 block BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022] hdfs.DataStreamer(1548): Error Recovery for BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]) is bad. 2023-06-05 17:54:41,945 WARN [PacketResponder: BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45297]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:41,946 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:53598 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53598 dst: /127.0.0.1:34911 java.io.IOException: Premature EOF from inputStream at 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:41,952 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:54:42,057 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:46938 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:45297:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46938 dst: /127.0.0.1:45297 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:54:42,059 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:54:42,059 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid 
44e058c4-c926-4ba2-9732-bdcf541a87d2) service to localhost.localdomain/127.0.0.1:44693 2023-06-05 17:54:42,059 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data5/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:54:42,059 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data6/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:54:42,064 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]] 2023-06-05 17:54:42,064 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]] 2023-06-05 17:54:42,064 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38597%2C1685987654787:(num 1685987677935) roll requested 2023-06-05 17:54:42,067 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741842_1024 2023-06-05 17:54:42,068 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK] 2023-06-05 17:54:42,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38597] regionserver.HRegion(9158): Flush requested on 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:54:42,071 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ef29a7cc137b39b01715319e4e12c10 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:54:42,073 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42504 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741843_1025]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block 
BP-1666227208-148.251.75.209-1685987652982:blk_1073741843_1025 to mirror 127.0.0.1:42883: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,073 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741843_1025
2023-06-05 17:54:42,073 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42504 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741843_1025]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42504 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,074 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]
2023-06-05 17:54:42,077 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741844_1026
2023-06-05 17:54:42,078 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]
2023-06-05 17:54:42,079 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741845_1027
2023-06-05 17:54:42,080 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:54:42,082 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741847_1029
2023-06-05 17:54:42,082 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42518 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741846_1028 to mirror 127.0.0.1:45297: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,082 WARN [Thread-649] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741846_1028
2023-06-05 17:54:42,082 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42518 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42518 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,083 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK]
2023-06-05 17:54:42,083 WARN [Thread-649] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:54:42,084 WARN [IPC Server handler 4 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-06-05 17:54:42,084 WARN [IPC Server handler 4 on default port 44693] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-06-05 17:54:42,084 WARN [IPC Server handler 4 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-06-05 17:54:42,084 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741848_1030
2023-06-05 17:54:42,085 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]
2023-06-05 17:54:42,087 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741850_1032
2023-06-05 17:54:42,087 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]
2023-06-05 17:54:42,088 WARN [IPC Server handler 1 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-06-05 17:54:42,088 WARN [IPC Server handler 1 on default port 44693] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-06-05 17:54:42,088 WARN [IPC Server handler 1 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-06-05 17:54:42,091 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064
2023-06-05 17:54:42,091 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]]
2023-06-05 17:54:42,091 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 is not closed yet, will try archiving it next time
2023-06-05 17:54:42,097 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/f93450d71f6b49bfa6cc7667606e6469
2023-06-05 17:54:42,105 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/f93450d71f6b49bfa6cc7667606e6469 as hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/f93450d71f6b49bfa6cc7667606e6469
2023-06-05 17:54:42,111 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/f93450d71f6b49bfa6cc7667606e6469, entries=5, sequenceid=12, filesize=10.0 K
2023-06-05 17:54:42,113 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]]
2023-06-05 17:54:42,113 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]]
2023-06-05 17:54:42,113 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=7.35 KB/7531 for 5ef29a7cc137b39b01715319e4e12c10 in 42ms, sequenceid=12, compaction requested=false
2023-06-05 17:54:42,113 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38597%2C1685987654787:(num 1685987682064) roll requested
2023-06-05 17:54:42,114 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ef29a7cc137b39b01715319e4e12c10:
2023-06-05 17:54:42,119 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42574 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741852_1034 to mirror 127.0.0.1:42883: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,119 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741852_1034
2023-06-05 17:54:42,119 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42574 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42574 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,120 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]
2023-06-05 17:54:42,123 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42588 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741853_1035]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741853_1035 to mirror 127.0.0.1:45297: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,123 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741853_1035
2023-06-05 17:54:42,123 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42588 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741853_1035]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42588 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,123 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:54:42,125 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741854_1036
2023-06-05 17:54:42,125 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK]
2023-06-05 17:54:42,126 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741855_1037
2023-06-05 17:54:42,126 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]
2023-06-05 17:54:42,127 WARN [IPC Server handler 0 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-06-05 17:54:42,127 WARN [IPC Server handler 0 on default port 44693] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-06-05 17:54:42,127 WARN [IPC Server handler 0 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-06-05 17:54:42,131 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064 with entries=1, filesize=440 B; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682114
2023-06-05 17:54:42,131 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK]]
2023-06-05 17:54:42,131 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 is not closed yet, will try archiving it next time
2023-06-05 17:54:42,131 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064 is not closed yet, will try archiving it next time
2023-06-05 17:54:42,132 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754 to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987665754
2023-06-05 17:54:42,290 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas.
2023-06-05 17:54:42,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38597] regionserver.HRegion(9158): Flush requested on 5ef29a7cc137b39b01715319e4e12c10
2023-06-05 17:54:42,290 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ef29a7cc137b39b01715319e4e12c10 1/1 column families, dataSize=8.40 KB heapSize=9.25 KB
2023-06-05 17:54:42,295 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741857_1039
2023-06-05 17:54:42,296 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK]
2023-06-05 17:54:42,297 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741858_1040
2023-06-05 17:54:42,298 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:54:42,300 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42600 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741859_1041]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741859_1041 to mirror 127.0.0.1:40493: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,300 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741859_1041
2023-06-05 17:54:42,300 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42600 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741859_1041]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42600 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,300 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]
2023-06-05 17:54:42,302 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42602 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741860_1042 to mirror 127.0.0.1:42883: java.net.ConnectException: Connection refused
2023-06-05 17:54:42,302 WARN [Thread-666] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741860_1042
2023-06-05 17:54:42,303 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:42602 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42602 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:42,303 WARN [Thread-666] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42883,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]
2023-06-05 17:54:42,304 WARN [IPC Server handler 1 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-06-05 17:54:42,304 WARN [IPC Server handler 1 on default port 44693] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-06-05 17:54:42,304 WARN [IPC Server handler 1 on default port 44693] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-06-05 17:54:42,497 WARN [Listener at localhost.localdomain/43289] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:54:42,498 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064 is not closed yet, will try archiving it next time
2023-06-05 17:54:42,500 WARN [Listener at localhost.localdomain/43289] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:54:42,502 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:54:42,509 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/java.io.tmpdir/Jetty_localhost_45853_datanode____.s1kixw/webapp
2023-06-05 17:54:42,585 INFO [Listener at localhost.localdomain/43289] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45853
2023-06-05 17:54:42,597 WARN [Listener at localhost.localdomain/32877] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:54:42,673 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7655b7017bf0d090: Processing first storage report for DS-fd894fcc-241e-4914-9baf-f6c26f4e049d from datanode 26e04d54-7a51-4f38-b10a-c6db5ceed5e6
2023-06-05 17:54:42,674 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7655b7017bf0d090: from storage DS-fd894fcc-241e-4914-9baf-f6c26f4e049d node DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-05 17:54:42,674 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7655b7017bf0d090: Processing first storage report for DS-e7ba68a8-d710-4457-854c-dc02d09aa3ac from datanode 26e04d54-7a51-4f38-b10a-c6db5ceed5e6
2023-06-05 17:54:42,674 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7655b7017bf0d090: from storage DS-e7ba68a8-d710-4457-854c-dc02d09aa3ac node DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-05 17:54:42,708 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.40 KB at sequenceid=23 (bloomFilter=true), to=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/58a6a7ac315344bd8fe8fb596dce77e9
2023-06-05 17:54:42,719 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/58a6a7ac315344bd8fe8fb596dce77e9 as hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/58a6a7ac315344bd8fe8fb596dce77e9
2023-06-05 17:54:42,727 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/58a6a7ac315344bd8fe8fb596dce77e9, entries=7, sequenceid=23, filesize=12.1 K
2023-06-05 17:54:42,728 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.40 KB/8606, heapSize ~9.23 KB/9456, currentSize=0 B/0 for 5ef29a7cc137b39b01715319e4e12c10 in 438ms, sequenceid=23, compaction requested=false
2023-06-05 17:54:42,728 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ef29a7cc137b39b01715319e4e12c10:
2023-06-05 17:54:42,728 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=22.1 K, sizeToCheck=16.0 K
2023-06-05 17:54:42,728 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-05 17:54:42,729 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/58a6a7ac315344bd8fe8fb596dce77e9 because midkey is the same as first or last row
2023-06-05 17:54:43,282 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@d0c82bf] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:34911, datanodeUuid=5eac5c40-9ec0-417d-8823-6696584fe79b, infoPort=38745, infoSecurePort=0, ipcPort=44299, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741841_1023 to 127.0.0.1:39575 got
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:43,282 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@13d61a07] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:34911, datanodeUuid=5eac5c40-9ec0-417d-8823-6696584fe79b, infoPort=38745, infoSecurePort=0, ipcPort=44299, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741851_1033 to 127.0.0.1:39575 got
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
    at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:43,725 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:43,727 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44127%2C1685987653485:(num 1685987653646) roll requested
2023-06-05 17:54:43,736 WARN [Thread-708] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741862_1044
2023-06-05 17:54:43,737 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:43,738 WARN [Thread-708] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39575,DS-60a1262f-b0eb-47db-be64-b71b104f8cef,DISK]
2023-06-05 17:54:43,738 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423)
    at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68)
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:43,747 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
2023-06-05 17:54:43,748 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987653646 with entries=88, filesize=43.76 KB; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987683727
2023-06-05 17:54:43,748 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], DatanodeInfoWithStorage[127.0.0.1:45029,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]]
2023-06-05 17:54:43,748 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987653646 is not closed yet, will try archiving it next time
2023-06-05 17:54:43,748 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:43,749 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485/jenkins-hbase20.apache.org%2C44127%2C1685987653485.1685987653646; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:54:44,284 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@674ccfbc] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:34911, datanodeUuid=5eac5c40-9ec0-417d-8823-6696584fe79b, infoPort=38745, infoSecurePort=0, ipcPort=44299, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741861_1043 to 127.0.0.1:39575 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:55,675 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2fc65c9a] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741837_1013 to 127.0.0.1:45297 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:55,675 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@42f493fe] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741835_1011 to 127.0.0.1:45297 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:54:56,678 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@63d9e4c7] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741831_1007 to 127.0.0.1:39575 got
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:01,230 WARN [Thread-727] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741864_1046
2023-06-05 17:55:01,230 WARN [Thread-727] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:55:01,241 INFO [Listener at localhost.localdomain/32877] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682114 with entries=3, filesize=1.89 KB; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987701226
2023-06-05 17:55:01,241 DEBUG [Listener at localhost.localdomain/32877] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], DatanodeInfoWithStorage[127.0.0.1:45029,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]]
2023-06-05 17:55:01,241 DEBUG [Listener at localhost.localdomain/32877] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682114 is not closed yet, will try archiving it next time
2023-06-05 17:55:01,241 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935 to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987677935
2023-06-05 17:55:01,249 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064 to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682064
2023-06-05 17:55:01,251 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682114 to hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs/jenkins-hbase20.apache.org%2C38597%2C1685987654787.1685987682114
2023-06-05 17:55:01,251 INFO [sync.1] wal.FSHLog(774): LowReplication-Roller was enabled.
2023-06-05 17:55:01,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38597] regionserver.HRegion(9158): Flush requested on 5ef29a7cc137b39b01715319e4e12c10
2023-06-05 17:55:01,260 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ef29a7cc137b39b01715319e4e12c10 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-05 17:55:01,265 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-05 17:55:01,266 INFO [Listener at localhost.localdomain/32877] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-06-05 17:55:01,266 WARN [Thread-734] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741866_1048
2023-06-05 17:55:01,266 DEBUG [Listener at localhost.localdomain/32877] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x480ad75b to 127.0.0.1:53420
2023-06-05 17:55:01,266 DEBUG [Listener at localhost.localdomain/32877] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:55:01,266 DEBUG [Listener at localhost.localdomain/32877] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-05 17:55:01,266 DEBUG [Listener at localhost.localdomain/32877] util.JVMClusterUtil(257): Found active master hash=1633652648, stopped=false
2023-06-05 17:55:01,266 INFO [Listener at localhost.localdomain/32877] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44127,1685987653485
2023-06-05 17:55:01,267 WARN [Thread-734] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:55:01,268 INFO [Listener at localhost.localdomain/32877] procedure2.ProcedureExecutor(629): Stopping
2023-06-05 17:55:01,268 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:55:01,268 DEBUG [Listener at localhost.localdomain/32877] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4dbfa9d1 to 127.0.0.1:53420
2023-06-05 17:55:01,268 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:55:01,268 DEBUG [Listener at localhost.localdomain/32877] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:55:01,269 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:55:01,269 INFO [Listener at localhost.localdomain/32877] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36201,1685987653534' *****
2023-06-05 17:55:01,269 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:55:01,269 INFO [Listener at localhost.localdomain/32877] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-05 17:55:01,270 INFO [Listener at localhost.localdomain/32877] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38597,1685987654787' *****
2023-06-05 17:55:01,270 INFO [Listener at localhost.localdomain/32877] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-05 17:55:01,270 INFO [RS:0;jenkins-hbase20:36201] regionserver.HeapMemoryManager(220): Stopping
2023-06-05 17:55:01,270 INFO [RS:1;jenkins-hbase20:38597] regionserver.HeapMemoryManager(220): Stopping
2023-06-05 17:55:01,270 INFO [RS:0;jenkins-hbase20:36201] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-05 17:55:01,270 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-05 17:55:01,270 INFO [RS:0;jenkins-hbase20:36201] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-05 17:55:01,270 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(3303): Received CLOSE for ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:55:01,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:55:01,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:55:01,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:55:01,272 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36201,1685987653534
2023-06-05 17:55:01,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ea5dea98f3db6033eda5fa365120d0e4, disabling compactions & flushes
2023-06-05 17:55:01,273 DEBUG [RS:0;jenkins-hbase20:36201] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x79134eab to 127.0.0.1:53420
2023-06-05 17:55:01,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:55:01,273 DEBUG [RS:0;jenkins-hbase20:36201] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:55:01,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:55:01,273 INFO [RS:0;jenkins-hbase20:36201] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-05 17:55:01,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. after waiting 0 ms
2023-06-05 17:55:01,273 INFO [RS:0;jenkins-hbase20:36201] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-05 17:55:01,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.
2023-06-05 17:55:01,273 INFO [RS:0;jenkins-hbase20:36201] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-05 17:55:01,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing ea5dea98f3db6033eda5fa365120d0e4 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-05 17:55:01,273 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-05 17:55:01,274 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-06-05 17:55:01,274 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, ea5dea98f3db6033eda5fa365120d0e4=hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4.}
2023-06-05 17:55:01,274 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1504): Waiting on 1588230740, ea5dea98f3db6033eda5fa365120d0e4
2023-06-05 17:55:01,274 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:55:01,274 WARN [RS:0;jenkins-hbase20:36201.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,274 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:55:01,275 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36201%2C1685987653534:(num 1685987653939) roll requested
2023-06-05 17:55:01,275 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:55:01,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ea5dea98f3db6033eda5fa365120d0e4:
2023-06-05 17:55:01,275 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:55:01,275 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:55:01,276 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB
2023-06-05 17:55:01,276 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,36201,1685987653534: Unrecoverable exception while closing hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. *****
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,276 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,279 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2023-06-05 17:55:01,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:55:01,282 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-06-05 17:55:01,291 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory
2023-06-05 17:55:01,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC
2023-06-05 17:55:01,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication
2023-06-05 17:55:01,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server
2023-06-05 17:55:01,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1024983040, "init": 524288000, "max": 2051014656, "used": 322019712 }, "NonHeapMemoryUsage": { "committed": 133849088, "init": 2555904, "max": -1, "used": 131189544 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] }
2023-06-05 17:55:01,295 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=33 (bloomFilter=true), to=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/bee67e8cb0f7473d8df53f04eac20a1d
2023-06-05 17:55:01,300 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:55588 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741868_1050]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741868_1050 to mirror 127.0.0.1:45297: java.net.ConnectException: Connection refused
2023-06-05 17:55:01,300 WARN [Thread-741] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741868_1050
2023-06-05 17:55:01,301 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-393471360_17 at /127.0.0.1:55588 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741868_1050]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55588 dst: /127.0.0.1:34911
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:01,304 WARN [Thread-741] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:55:01,305 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44127] master.MasterRpcServices(609): jenkins-hbase20.apache.org,36201,1685987653534 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,36201,1685987653534: Unrecoverable exception while closing hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. *****
Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,314 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/bee67e8cb0f7473d8df53f04eac20a1d as hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/bee67e8cb0f7473d8df53f04eac20a1d
2023-06-05 17:55:01,321 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL
2023-06-05 17:55:01,321 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987653939 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987701275
2023-06-05 17:55:01,322 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], DatanodeInfoWithStorage[127.0.0.1:45029,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]]
2023-06-05 17:55:01,322 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,322 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987653939 is not closed yet, will try archiving it next time
2023-06-05 17:55:01,322 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.1685987653939; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:01,322 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta:.meta(num 1685987654114) roll requested
2023-06-05 17:55:01,333 WARN [Thread-751] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741870_1052
2023-06-05 17:55:01,333 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/bee67e8cb0f7473d8df53f04eac20a1d, entries=7, sequenceid=33, filesize=12.1 K
2023-06-05 17:55:01,334 WARN [Thread-751] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:55:01,335 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=3.15 KB/3228 for 5ef29a7cc137b39b01715319e4e12c10 in 75ms, sequenceid=33, compaction requested=true
2023-06-05 17:55:01,335 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ef29a7cc137b39b01715319e4e12c10:
2023-06-05 17:55:01,335 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=34.2 K, sizeToCheck=16.0 K
2023-06-05 17:55:01,335 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-06-05 17:55:01,335 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split
hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/bee67e8cb0f7473d8df53f04eac20a1d because midkey is the same as first or last row 2023-06-05 17:55:01,335 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-05 17:55:01,335 INFO [RS:1;jenkins-hbase20:38597] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-05 17:55:01,335 INFO [RS:1;jenkins-hbase20:38597] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-05 17:55:01,335 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(3303): Received CLOSE for 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:55:01,337 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:55:01,337 DEBUG [RS:1;jenkins-hbase20:38597] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18fea8e3 to 127.0.0.1:53420 2023-06-05 17:55:01,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5ef29a7cc137b39b01715319e4e12c10, disabling compactions & flushes 2023-06-05 17:55:01,337 DEBUG [RS:1;jenkins-hbase20:38597] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:01,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
2023-06-05 17:55:01,337 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-06-05 17:55:01,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:55:01,337 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1478): Online Regions={5ef29a7cc137b39b01715319e4e12c10=TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10.} 2023-06-05 17:55:01,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. after waiting 0 ms 2023-06-05 17:55:01,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
2023-06-05 17:55:01,337 DEBUG [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1504): Waiting on 5ef29a7cc137b39b01715319e4e12c10 2023-06-05 17:55:01,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 5ef29a7cc137b39b01715319e4e12c10 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-05 17:55:01,357 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-05 17:55:01,360 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987654114.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987701322.meta 2023-06-05 17:55:01,364 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34911,DS-626d1d6e-0ca1-4238-a441-4dba74ad675a,DISK], DatanodeInfoWithStorage[127.0.0.1:45029,DS-fd894fcc-241e-4914-9baf-f6c26f4e049d,DISK]] 2023-06-05 17:55:01,364 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:01,364 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987654114.meta is not closed yet, will try archiving it next time 2023-06-05 17:55:01,364 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534/jenkins-hbase20.apache.org%2C36201%2C1685987653534.meta.1685987654114.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40493,DS-24e6dee6-c991-4bcd-be7a-92c64d4b7698,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:01,368 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:55628 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741872_1054]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current]'}, localName='127.0.0.1:34911', datanodeUuid='5eac5c40-9ec0-417d-8823-6696584fe79b', xmitsInProgress=0}:Exception transfering block BP-1666227208-148.251.75.209-1685987652982:blk_1073741872_1054 to mirror 127.0.0.1:45297: java.net.ConnectException: Connection refused 2023-06-05 17:55:01,368 WARN [Thread-757] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741872_1054 2023-06-05 17:55:01,369 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-734844736_17 at /127.0.0.1:55628 [Receiving block BP-1666227208-148.251.75.209-1685987652982:blk_1073741872_1054]] datanode.DataXceiver(323): 127.0.0.1:34911:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55628 dst: /127.0.0.1:34911 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:01,369 WARN [Thread-757] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK] 2023-06-05 17:55:01,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=39 (bloomFilter=true), to=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/c4d985c3bc7c419ba695f1d5b48ef88b 2023-06-05 17:55:01,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/.tmp/info/c4d985c3bc7c419ba695f1d5b48ef88b as hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/c4d985c3bc7c419ba695f1d5b48ef88b 2023-06-05 17:55:01,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/info/c4d985c3bc7c419ba695f1d5b48ef88b, entries=3, sequenceid=39, filesize=7.9 K 2023-06-05 17:55:01,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 5ef29a7cc137b39b01715319e4e12c10 in 64ms, sequenceid=39, compaction requested=true 2023-06-05 17:55:01,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/data/default/TestLogRolling-testLogRollOnDatanodeDeath/5ef29a7cc137b39b01715319e4e12c10/recovered.edits/42.seqid, newMaxSeqId=42, maxSeqId=1 2023-06-05 17:55:01,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 2023-06-05 17:55:01,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5ef29a7cc137b39b01715319e4e12c10: 2023-06-05 17:55:01,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685987654887.5ef29a7cc137b39b01715319e4e12c10. 
2023-06-05 17:55:01,474 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-05 17:55:01,474 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(3303): Received CLOSE for ea5dea98f3db6033eda5fa365120d0e4 2023-06-05 17:55:01,474 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:55:01,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ea5dea98f3db6033eda5fa365120d0e4, disabling compactions & flushes 2023-06-05 17:55:01,474 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:55:01,474 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:55:01,474 DEBUG [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1504): Waiting on 1588230740, ea5dea98f3db6033eda5fa365120d0e4 2023-06-05 17:55:01,474 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:55:01,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. 
2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. after waiting 0 ms 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ea5dea98f3db6033eda5fa365120d0e4: 2023-06-05 17:55:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685987654182.ea5dea98f3db6033eda5fa365120d0e4. 2023-06-05 17:55:01,537 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38597,1685987654787; all regions closed. 
2023-06-05 17:55:01,538 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:55:01,545 DEBUG [RS:1;jenkins-hbase20:38597] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/oldWALs 2023-06-05 17:55:01,545 INFO [RS:1;jenkins-hbase20:38597] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38597%2C1685987654787:(num 1685987701226) 2023-06-05 17:55:01,545 DEBUG [RS:1;jenkins-hbase20:38597] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:01,545 INFO [RS:1;jenkins-hbase20:38597] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:55:01,546 INFO [RS:1;jenkins-hbase20:38597] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-05 17:55:01,546 INFO [RS:1;jenkins-hbase20:38597] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-05 17:55:01,546 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-05 17:55:01,546 INFO [RS:1;jenkins-hbase20:38597] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-05 17:55:01,546 INFO [RS:1;jenkins-hbase20:38597] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-05 17:55:01,547 INFO [RS:1;jenkins-hbase20:38597] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38597 2023-06-05 17:55:01,550 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:55:01,550 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:55:01,550 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:55:01,550 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38597,1685987654787 2023-06-05 17:55:01,551 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:55:01,551 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38597,1685987654787] 2023-06-05 17:55:01,552 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38597,1685987654787; numProcessing=1 2023-06-05 17:55:01,553 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38597,1685987654787 already deleted, retry=false 2023-06-05 17:55:01,553 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38597,1685987654787 expired; onlineServers=1 2023-06-05 17:55:01,675 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-05 17:55:01,675 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36201,1685987653534; all regions closed. 2023-06-05 17:55:01,675 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@46031aa0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer BP-1666227208-148.251.75.209-1685987652982:blk_1073741825_1001 to 127.0.0.1:45297 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:01,675 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@43ea97d4] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45029, datanodeUuid=26e04d54-7a51-4f38-b10a-c6db5ceed5e6, infoPort=37483, infoSecurePort=0, ipcPort=32877, storageInfo=lv=-57;cid=testClusterID;nsid=1790368170;c=1685987652982):Failed to transfer 
BP-1666227208-148.251.75.209-1685987652982:blk_1073741836_1012 to 127.0.0.1:45297 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:01,675 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534 2023-06-05 17:55:01,679 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/WALs/jenkins-hbase20.apache.org,36201,1685987653534 2023-06-05 17:55:01,682 DEBUG [RS:0;jenkins-hbase20:36201] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:01,683 INFO [RS:0;jenkins-hbase20:36201] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:55:01,683 INFO [RS:0;jenkins-hbase20:36201] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-05 17:55:01,683 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-05 17:55:01,684 INFO [RS:0;jenkins-hbase20:36201] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36201 2023-06-05 17:55:01,685 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:55:01,685 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36201,1685987653534 2023-06-05 17:55:01,686 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36201,1685987653534] 2023-06-05 17:55:01,686 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36201,1685987653534; numProcessing=2 2023-06-05 17:55:01,687 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36201,1685987653534 already deleted, retry=false 2023-06-05 17:55:01,687 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36201,1685987653534 expired; onlineServers=0 2023-06-05 17:55:01,687 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44127,1685987653485' ***** 2023-06-05 17:55:01,687 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-05 17:55:01,687 DEBUG [M:0;jenkins-hbase20:44127] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6560075b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:55:01,687 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44127,1685987653485 2023-06-05 17:55:01,687 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44127,1685987653485; all regions closed. 2023-06-05 17:55:01,687 DEBUG [M:0;jenkins-hbase20:44127] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:01,687 DEBUG [M:0;jenkins-hbase20:44127] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-05 17:55:01,688 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-05 17:55:01,688 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987653725] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987653725,5,FailOnTimeoutGroup] 2023-06-05 17:55:01,688 DEBUG [M:0;jenkins-hbase20:44127] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-05 17:55:01,688 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987653725] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987653725,5,FailOnTimeoutGroup] 2023-06-05 17:55:01,688 INFO [M:0;jenkins-hbase20:44127] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-05 17:55:01,689 INFO [M:0;jenkins-hbase20:44127] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-05 17:55:01,689 INFO [M:0;jenkins-hbase20:44127] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-05 17:55:01,689 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-05 17:55:01,689 DEBUG [M:0;jenkins-hbase20:44127] master.HMaster(1512): Stopping service threads 2023-06-05 17:55:01,689 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:01,689 INFO [M:0;jenkins-hbase20:44127] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-05 17:55:01,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:55:01,690 ERROR [M:0;jenkins-hbase20:44127] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-05 17:55:01,690 INFO [M:0;jenkins-hbase20:44127] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-05 17:55:01,690 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-05 17:55:01,691 DEBUG [M:0;jenkins-hbase20:44127] zookeeper.ZKUtil(398): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-05 17:55:01,691 WARN [M:0;jenkins-hbase20:44127] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-05 17:55:01,691 INFO [M:0;jenkins-hbase20:44127] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-05 17:55:01,691 INFO [M:0;jenkins-hbase20:44127] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-05 17:55:01,692 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:55:01,692 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:01,692 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:01,692 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:55:01,692 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:01,692 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.13 KB heapSize=45.77 KB
2023-06-05 17:55:01,701 WARN [Thread-768] hdfs.DataStreamer(1658): Abandoning BP-1666227208-148.251.75.209-1685987652982:blk_1073741874_1056
2023-06-05 17:55:01,702 WARN [Thread-768] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45297,DS-0c111fa7-8751-49bb-8765-c45cfe06bbff,DISK]
2023-06-05 17:55:01,710 INFO [M:0;jenkins-hbase20:44127] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.13 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fd492180b0444baeb3972daa34b38c47
2023-06-05 17:55:01,717 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fd492180b0444baeb3972daa34b38c47 as hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fd492180b0444baeb3972daa34b38c47
2023-06-05 17:55:01,723 INFO [M:0;jenkins-hbase20:44127] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44693/user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fd492180b0444baeb3972daa34b38c47, entries=11, sequenceid=92, filesize=7.0 K
2023-06-05 17:55:01,724 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegion(2948): Finished flush of dataSize ~38.13 KB/39047, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=92, compaction requested=false
2023-06-05 17:55:01,725 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:01,725 DEBUG [M:0;jenkins-hbase20:44127] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:55:01,726 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1e4a8a7f-4ece-db18-8366-afff0b49bd96/MasterData/WALs/jenkins-hbase20.apache.org,44127,1685987653485
2023-06-05 17:55:01,729 INFO [M:0;jenkins-hbase20:44127] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-05 17:55:01,729 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:55:01,730 INFO [M:0;jenkins-hbase20:44127] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44127
2023-06-05 17:55:01,731 DEBUG [M:0;jenkins-hbase20:44127] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44127,1685987653485 already deleted, retry=false
2023-06-05 17:55:01,770 INFO [RS:1;jenkins-hbase20:38597] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38597,1685987654787; zookeeper connection closed.
2023-06-05 17:55:01,770 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,770 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:38597-0x101bc680f800005, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,770 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@308b805c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@308b805c
2023-06-05 17:55:01,811 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:55:01,870 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,870 INFO [M:0;jenkins-hbase20:44127] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44127,1685987653485; zookeeper connection closed.
2023-06-05 17:55:01,870 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): master:44127-0x101bc680f800000, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,970 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,970 INFO [RS:0;jenkins-hbase20:36201] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36201,1685987653534; zookeeper connection closed.
2023-06-05 17:55:01,970 DEBUG [Listener at localhost.localdomain/37547-EventThread] zookeeper.ZKWatcher(600): regionserver:36201-0x101bc680f800001, quorum=127.0.0.1:53420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:01,971 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4e304de5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4e304de5
2023-06-05 17:55:01,971 INFO [Listener at localhost.localdomain/32877] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete
2023-06-05 17:55:01,972 WARN [Listener at localhost.localdomain/32877] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:55:01,976 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:55:02,088 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:55:02,088 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid 26e04d54-7a51-4f38-b10a-c6db5ceed5e6) service to localhost.localdomain/127.0.0.1:44693
2023-06-05 17:55:02,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data3/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:02,089 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data4/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:02,093 WARN [Listener at localhost.localdomain/32877] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:55:02,097 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:55:02,200 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:55:02,200 WARN [BP-1666227208-148.251.75.209-1685987652982 heartbeating to localhost.localdomain/127.0.0.1:44693] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1666227208-148.251.75.209-1685987652982 (Datanode Uuid 5eac5c40-9ec0-417d-8823-6696584fe79b) service to localhost.localdomain/127.0.0.1:44693
2023-06-05 17:55:02,201 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data7/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:02,201 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/cluster_98779272-068b-8a18-78b9-03ff396a6c58/dfs/data/data8/current/BP-1666227208-148.251.75.209-1685987652982] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:02,219 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-05 17:55:02,335 INFO [Listener at localhost.localdomain/32877] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-05 17:55:02,370 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-05 17:55:02,381 INFO [Listener at localhost.localdomain/32877] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=77 (was 52)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:44693
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:44693
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:44693
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-6-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-5-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Abort regionserver monitor
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: nioEventLoopGroup-13-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-12-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-6-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Timer for 'DataNode' metrics system
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: nioEventLoopGroup-12-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/32877
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-12-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-6-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-5-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-5-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-17-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44693 from jenkins.hfs.2
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: ForkJoinPool-3-worker-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44693 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44693 from jenkins.hfs.1
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-17-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-17-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-3-worker-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44693 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
 - Thread LEAK? -, OpenFileDescriptor=471 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=127 (was 106) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=7372 (was 7137) - AvailableMemoryMB LEAK? -
2023-06-05 17:55:02,389 INFO [Listener at localhost.localdomain/32877] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=77, OpenFileDescriptor=471, MaxFileDescriptor=60000, SystemLoadAverage=127, ProcessCount=169, AvailableMemoryMB=7372
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/hadoop.log.dir so I do NOT create it in target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f4db9bf8-a5a7-3da6-125c-c835b306a47d/hadoop.tmp.dir so I do NOT create it in target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd, deleteOnExit=true
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/test.cache.data in system properties and HBase conf
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/hadoop.tmp.dir in system properties and HBase conf
2023-06-05 17:55:02,390 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/hadoop.log.dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-05 17:55:02,391 DEBUG [Listener at localhost.localdomain/32877] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-05 17:55:02,391 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/nfs.dump.dir in system properties and HBase conf 2023-06-05 
17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-05 17:55:02,392 INFO [Listener at localhost.localdomain/32877] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-05 17:55:02,394 WARN [Listener at localhost.localdomain/32877] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:55:02,395 WARN [Listener at localhost.localdomain/32877] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:55:02,395 WARN [Listener at localhost.localdomain/32877] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:55:02,424 WARN [Listener at localhost.localdomain/32877] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:02,427 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:02,433 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_localdomain_34995_hdfs____.vgzy8i/webapp 2023-06-05 17:55:02,540 INFO [Listener at localhost.localdomain/32877] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34995 2023-06-05 17:55:02,542 WARN [Listener at localhost.localdomain/32877] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:55:02,543 WARN [Listener at localhost.localdomain/32877] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:55:02,544 WARN [Listener at localhost.localdomain/32877] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:55:02,603 WARN [Listener at localhost.localdomain/44149] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:02,620 WARN [Listener at localhost.localdomain/44149] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:02,624 WARN [Listener at localhost.localdomain/44149] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:02,625 INFO [Listener at localhost.localdomain/44149] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:02,632 INFO [Listener at localhost.localdomain/44149] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_34801_datanode____9j0jdr/webapp 2023-06-05 17:55:02,715 INFO [Listener at localhost.localdomain/44149] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34801 2023-06-05 17:55:02,725 WARN [Listener at localhost.localdomain/36239] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:02,747 WARN [Listener at localhost.localdomain/36239] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:02,750 WARN [Listener at localhost.localdomain/36239] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-06-05 17:55:02,751 INFO [Listener at localhost.localdomain/36239] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:02,755 INFO [Listener at localhost.localdomain/36239] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_34031_datanode____.1x3gng/webapp 2023-06-05 17:55:02,829 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4cdbfe57583b5943: Processing first storage report for DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 from datanode 06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:02,829 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4cdbfe57583b5943: from storage DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 node DatanodeRegistration(127.0.0.1:43909, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=43603, infoSecurePort=0, ipcPort=36239, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:02,829 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4cdbfe57583b5943: Processing first storage report for DS-825459d5-0650-4d60-b8e2-228314b4b274 from datanode 06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:02,829 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4cdbfe57583b5943: from storage DS-825459d5-0650-4d60-b8e2-228314b4b274 node DatanodeRegistration(127.0.0.1:43909, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=43603, infoSecurePort=0, ipcPort=36239, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:02,839 INFO [Listener at localhost.localdomain/36239] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34031 2023-06-05 17:55:02,846 WARN [Listener at localhost.localdomain/38071] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:02,857 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:55:02,917 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd4af196e33c3fc7: Processing first storage report for DS-2c4df25d-980e-464d-adb8-91c953cf91d6 from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:02,917 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd4af196e33c3fc7: from storage DS-2c4df25d-980e-464d-adb8-91c953cf91d6 node DatanodeRegistration(127.0.0.1:45237, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=38887, infoSecurePort=0, ipcPort=38071, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:02,917 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd4af196e33c3fc7: Processing first storage report for DS-dc110e91-e793-428a-95dc-325d8550ee9b from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:02,917 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd4af196e33c3fc7: from storage DS-dc110e91-e793-428a-95dc-325d8550ee9b node DatanodeRegistration(127.0.0.1:45237, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=38887, infoSecurePort=0, ipcPort=38071, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, 
invalidatedBlocks: 0 2023-06-05 17:55:02,959 DEBUG [Listener at localhost.localdomain/38071] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db 2023-06-05 17:55:02,994 INFO [Listener at localhost.localdomain/38071] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/zookeeper_0, clientPort=62057, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-05 17:55:03,002 INFO [Listener at localhost.localdomain/38071] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62057 2023-06-05 17:55:03,002 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,003 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,027 INFO [Listener at localhost.localdomain/38071] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66 with version=8 2023-06-05 17:55:03,027 INFO [Listener at localhost.localdomain/38071] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:55:03,029 INFO [Listener at localhost.localdomain/38071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-05 17:55:03,031 INFO [Listener at localhost.localdomain/38071] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39347 2023-06-05 17:55:03,031 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,032 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,033 INFO [Listener at localhost.localdomain/38071] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39347 connecting to ZooKeeper ensemble=127.0.0.1:62057 2023-06-05 17:55:03,038 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:393470x0, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:55:03,040 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39347-0x101bc68d0f20000 connected 2023-06-05 17:55:03,060 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:55:03,061 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:03,061 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:55:03,062 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39347 2023-06-05 17:55:03,063 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39347 2023-06-05 17:55:03,063 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39347 2023-06-05 17:55:03,063 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39347 2023-06-05 17:55:03,064 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39347 2023-06-05 17:55:03,064 INFO [Listener at localhost.localdomain/38071] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66, hbase.cluster.distributed=false 2023-06-05 17:55:03,074 INFO [Listener at localhost.localdomain/38071] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:55:03,075 INFO [Listener at localhost.localdomain/38071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-05 17:55:03,078 INFO [Listener at localhost.localdomain/38071] ipc.NettyRpcServer(120): Bind to /148.251.75.209:42693 2023-06-05 17:55:03,078 INFO [Listener at localhost.localdomain/38071] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-05 17:55:03,083 DEBUG [Listener at localhost.localdomain/38071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-05 17:55:03,083 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,085 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,086 INFO [Listener at localhost.localdomain/38071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42693 connecting to ZooKeeper ensemble=127.0.0.1:62057 2023-06-05 17:55:03,089 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:426930x0, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:55:03,091 DEBUG 
[Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): regionserver:426930x0, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:55:03,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42693-0x101bc68d0f20001 connected 2023-06-05 17:55:03,092 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:03,092 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ZKUtil(164): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:55:03,093 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42693 2023-06-05 17:55:03,093 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42693 2023-06-05 17:55:03,094 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42693 2023-06-05 17:55:03,097 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42693 2023-06-05 17:55:03,097 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42693 2023-06-05 17:55:03,101 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,129 DEBUG [Listener at localhost.localdomain/38071-EventThread] 
zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:55:03,129 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,130 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:55:03,131 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:55:03,131 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,132 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:55:03,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:55:03,132 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39347,1685987703028 from backup master directory 2023-06-05 17:55:03,133 DEBUG [Listener at 
localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,133 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:55:03,133 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:55:03,133 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,157 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/hbase.id with ID: 2a9977b0-3839-4420-a20c-98b3b87dbf9f 2023-06-05 17:55:03,178 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:03,191 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x45958e40 to 127.0.0.1:62057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:55:03,217 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@444697ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:55:03,217 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:03,218 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-05 17:55:03,222 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:55:03,224 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store-tmp 2023-06-05 17:55:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-05 17:55:03,642 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:55:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:03,642 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-05 17:55:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:55:03,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39347%2C1685987703028, suffix=, logDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028, archiveDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/oldWALs, maxLogs=10 2023-06-05 17:55:03,667 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987703647 2023-06-05 17:55:03,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK], DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] 2023-06-05 17:55:03,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:55:03,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:03,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,671 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,673 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-05 17:55:03,673 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-05 17:55:03,674 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:03,675 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,676 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:03,685 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:03,685 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=812286, jitterRate=0.0328756719827652}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:55:03,686 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:55:03,693 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-05 17:55:03,695 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-05 17:55:03,695 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-05 17:55:03,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-05 17:55:03,697 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-05 17:55:03,697 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-05 17:55:03,697 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-05 17:55:03,698 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-05 17:55:03,699 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-05 17:55:03,714 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:55:03,714 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:55:03,715 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:55:03,715 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:55:03,716 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:55:03,720 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:55:03,721 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:55:03,722 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:55:03,724 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:03,724 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:03,725 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,725 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39347,1685987703028, sessionid=0x101bc68d0f20000, setting cluster-up flag (Was=false) 2023-06-05 17:55:03,731 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,734 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:55:03,736 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,740 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:03,744 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:55:03,745 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:03,746 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.hbase-snapshot/.tmp 2023-06-05 17:55:03,757 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:03,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:03,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:03,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:03,758 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:03,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:55:03,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,759 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:55:03,759 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,774 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987733774 2023-06-05 17:55:03,774 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:55:03,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:55:03,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:55:03,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:55:03,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:55:03,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-05 17:55:03,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:55:03,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:55:03,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:55:03,783 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:03,786 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:55:03,787 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:55:03,787 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:55:03,787 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987703787,5,FailOnTimeoutGroup] 2023-06-05 17:55:03,787 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987703787,5,FailOnTimeoutGroup] 2023-06-05 17:55:03,787 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,788 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-05 17:55:03,788 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,788 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-05 17:55:03,790 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:03,807 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(951): ClusterId : 2a9977b0-3839-4420-a20c-98b3b87dbf9f 2023-06-05 17:55:03,808 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:55:03,816 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:55:03,817 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-05 17:55:03,826 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:03,827 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table 
descriptor to hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:03,828 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66 2023-06-05 17:55:03,829 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:55:03,832 DEBUG [RS:0;jenkins-hbase20:42693] zookeeper.ReadOnlyZKClient(139): Connect 0x5b8938a8 to 127.0.0.1:62057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:55:03,844 DEBUG [RS:0;jenkins-hbase20:42693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fb6b701, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:55:03,845 DEBUG [RS:0;jenkins-hbase20:42693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ab43293, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:55:03,847 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:03,849 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:55:03,851 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/info 2023-06-05 17:55:03,851 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
1588230740 columnFamilyName info 2023-06-05 17:55:03,852 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:03,852 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:55:03,854 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:55:03,854 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:42693 2023-06-05 17:55:03,854 INFO [RS:0;jenkins-hbase20:42693] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:55:03,854 INFO [RS:0;jenkins-hbase20:42693] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:55:03,854 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:55:03,854 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1022): About to register with Master. 2023-06-05 17:55:03,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:03,855 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,39347,1685987703028 with isa=jenkins-hbase20.apache.org/148.251.75.209:42693, startcode=1685987703074 2023-06-05 17:55:03,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:55:03,855 DEBUG [RS:0;jenkins-hbase20:42693] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-05 17:55:03,857 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/table 2023-06-05 17:55:03,862 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:55:03,864 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36631, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-05 17:55:03,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:03,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,866 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740 2023-06-05 17:55:03,867 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66 2023-06-05 17:55:03,867 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44149 2023-06-05 17:55:03,867 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-05 17:55:03,867 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740 2023-06-05 17:55:03,868 DEBUG [Listener at localhost.localdomain/38071-EventThread] 
zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:55:03,869 DEBUG [RS:0;jenkins-hbase20:42693] zookeeper.ZKUtil(162): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,869 WARN [RS:0;jenkins-hbase20:42693] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:55:03,869 INFO [RS:0;jenkins-hbase20:42693] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:55:03,870 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,870 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,42693,1685987703074] 2023-06-05 17:55:03,872 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:55:03,874 DEBUG [RS:0;jenkins-hbase20:42693] zookeeper.ZKUtil(162): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,874 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:55:03,875 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-05 17:55:03,875 INFO [RS:0;jenkins-hbase20:42693] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-05 17:55:03,876 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:03,877 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=877526, jitterRate=0.11583290994167328}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:55:03,877 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:55:03,877 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:55:03,877 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:55:03,877 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:55:03,877 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:55:03,877 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:55:03,878 
INFO [RS:0;jenkins-hbase20:42693] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-05 17:55:03,878 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:55:03,878 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:55:03,878 INFO [RS:0;jenkins-hbase20:42693] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-05 17:55:03,878 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,878 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-05 17:55:03,880 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-05 17:55:03,880 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:03,881 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,881 DEBUG [RS:0;jenkins-hbase20:42693] 
executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,882 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,882 DEBUG [RS:0;jenkins-hbase20:42693] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:03,883 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,883 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,883 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:03,884 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-05 17:55:03,885 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-05 17:55:03,893 INFO [RS:0;jenkins-hbase20:42693] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-05 17:55:03,894 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42693,1685987703074-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:55:03,908 INFO [RS:0;jenkins-hbase20:42693] regionserver.Replication(203): jenkins-hbase20.apache.org,42693,1685987703074 started 2023-06-05 17:55:03,909 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,42693,1685987703074, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:42693, sessionid=0x101bc68d0f20001 2023-06-05 17:55:03,909 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-05 17:55:03,909 DEBUG [RS:0;jenkins-hbase20:42693] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,909 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42693,1685987703074' 2023-06-05 17:55:03,909 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42693,1685987703074' 2023-06-05 17:55:03,910 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-05 17:55:03,911 DEBUG [RS:0;jenkins-hbase20:42693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-05 17:55:03,911 DEBUG [RS:0;jenkins-hbase20:42693] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-05 17:55:03,911 INFO [RS:0;jenkins-hbase20:42693] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-05 17:55:03,911 INFO [RS:0;jenkins-hbase20:42693] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-05 17:55:04,013 INFO [RS:0;jenkins-hbase20:42693] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42693%2C1685987703074, suffix=, logDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074, archiveDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/oldWALs, maxLogs=32 2023-06-05 17:55:04,023 INFO [RS:0;jenkins-hbase20:42693] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 2023-06-05 17:55:04,023 DEBUG [RS:0;jenkins-hbase20:42693] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] 2023-06-05 17:55:04,035 DEBUG [jenkins-hbase20:39347] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-05 17:55:04,036 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as 
jenkins-hbase20.apache.org,42693,1685987703074, state=OPENING 2023-06-05 17:55:04,037 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-05 17:55:04,038 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:04,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42693,1685987703074}] 2023-06-05 17:55:04,039 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:55:04,194 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:04,194 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-05 17:55:04,196 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-05 17:55:04,201 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-05 17:55:04,201 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:55:04,203 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074, archiveDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/oldWALs, maxLogs=32 2023-06-05 17:55:04,231 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987704210.meta 2023-06-05 17:55:04,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] 2023-06-05 17:55:04,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:55:04,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-05 17:55:04,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-05 17:55:04,232 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-05 17:55:04,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-05 17:55:04,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:04,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-05 17:55:04,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-05 17:55:04,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:55:04,240 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/info 2023-06-05 17:55:04,240 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/info 2023-06-05 17:55:04,240 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:55:04,241 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:04,242 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:55:04,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:55:04,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:55:04,243 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:55:04,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:04,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:55:04,245 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/table 2023-06-05 17:55:04,245 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740/table 2023-06-05 17:55:04,246 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:55:04,247 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:04,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740 2023-06-05 17:55:04,249 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/meta/1588230740 2023-06-05 17:55:04,251 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:55:04,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:55:04,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=816937, jitterRate=0.03879009187221527}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:55:04,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:55:04,256 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987704193 2023-06-05 17:55:04,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-05 17:55:04,263 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-05 17:55:04,264 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42693,1685987703074, state=OPEN 2023-06-05 17:55:04,265 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-05 17:55:04,265 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:55:04,275 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-05 17:55:04,275 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42693,1685987703074 in 226 msec 2023-06-05 17:55:04,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-05 17:55:04,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 394 msec 2023-06-05 17:55:04,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 524 msec 2023-06-05 17:55:04,282 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987704282, completionTime=-1 2023-06-05 17:55:04,282 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-05 17:55:04,282 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-05 17:55:04,285 DEBUG [hconnection-0x48c75d1b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:55:04,287 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:55:04,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-05 17:55:04,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987764289 2023-06-05 17:55:04,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685987824289 2023-06-05 17:55:04,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-05 17:55:04,295 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39347,1685987703028-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:04,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39347,1685987703028-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:04,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39347,1685987703028-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-05 17:55:04,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39347, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:04,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:04,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-05 17:55:04,297 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:04,298 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-05 17:55:04,299 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-05 17:55:04,301 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:55:04,302 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:55:04,306 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,307 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f empty. 2023-06-05 17:55:04,307 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,307 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-05 17:55:04,323 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:04,325 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 215aae13234348d9e24b41b6b6aaf76f, NAME => 'hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp 2023-06-05 17:55:04,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:04,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 215aae13234348d9e24b41b6b6aaf76f, disabling compactions & flushes 2023-06-05 17:55:04,337 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. after waiting 0 ms 2023-06-05 17:55:04,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,337 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 215aae13234348d9e24b41b6b6aaf76f: 2023-06-05 17:55:04,340 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:55:04,341 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987704341"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987704341"}]},"ts":"1685987704341"} 2023-06-05 17:55:04,344 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-05 17:55:04,345 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:55:04,345 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987704345"}]},"ts":"1685987704345"} 2023-06-05 17:55:04,347 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-05 17:55:04,351 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=215aae13234348d9e24b41b6b6aaf76f, ASSIGN}] 2023-06-05 17:55:04,353 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=215aae13234348d9e24b41b6b6aaf76f, ASSIGN 2023-06-05 17:55:04,355 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=215aae13234348d9e24b41b6b6aaf76f, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42693,1685987703074; forceNewPlan=false, retain=false 2023-06-05 17:55:04,506 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=215aae13234348d9e24b41b6b6aaf76f, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:04,506 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987704506"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987704506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987704506"}]},"ts":"1685987704506"} 2023-06-05 17:55:04,508 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 215aae13234348d9e24b41b6b6aaf76f, server=jenkins-hbase20.apache.org,42693,1685987703074}] 2023-06-05 17:55:04,665 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 215aae13234348d9e24b41b6b6aaf76f, NAME => 'hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:55:04,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:04,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,667 INFO 
[StoreOpener-215aae13234348d9e24b41b6b6aaf76f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,668 DEBUG [StoreOpener-215aae13234348d9e24b41b6b6aaf76f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f/info 2023-06-05 17:55:04,668 DEBUG [StoreOpener-215aae13234348d9e24b41b6b6aaf76f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f/info 2023-06-05 17:55:04,669 INFO [StoreOpener-215aae13234348d9e24b41b6b6aaf76f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 215aae13234348d9e24b41b6b6aaf76f columnFamilyName info 2023-06-05 17:55:04,669 INFO [StoreOpener-215aae13234348d9e24b41b6b6aaf76f-1] regionserver.HStore(310): Store=215aae13234348d9e24b41b6b6aaf76f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-05 17:55:04,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:04,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/hbase/namespace/215aae13234348d9e24b41b6b6aaf76f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:04,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 215aae13234348d9e24b41b6b6aaf76f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=805891, jitterRate=0.024743959307670593}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:55:04,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 215aae13234348d9e24b41b6b6aaf76f: 2023-06-05 17:55:04,678 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f., pid=6, masterSystemTime=1685987704661 2023-06-05 17:55:04,681 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,681 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:04,681 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=215aae13234348d9e24b41b6b6aaf76f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:04,682 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987704681"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987704681"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987704681"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987704681"}]},"ts":"1685987704681"} 2023-06-05 17:55:04,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-05 17:55:04,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 215aae13234348d9e24b41b6b6aaf76f, server=jenkins-hbase20.apache.org,42693,1685987703074 in 176 msec 2023-06-05 17:55:04,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-05 17:55:04,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=215aae13234348d9e24b41b6b6aaf76f, ASSIGN in 335 msec 2023-06-05 17:55:04,689 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:55:04,690 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987704689"}]},"ts":"1685987704689"} 2023-06-05 17:55:04,691 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-05 17:55:04,693 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:55:04,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 396 msec 2023-06-05 17:55:04,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-05 17:55:04,704 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:55:04,704 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:04,708 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-05 17:55:04,718 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, 
quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:55:04,722 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-06-05 17:55:04,731 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-05 17:55:04,740 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:55:04,744 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-05 17:55:04,756 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-05 17:55:04,757 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-05 17:55:04,757 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.624sec 2023-06-05 17:55:04,757 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-05 17:55:04,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-05 17:55:04,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-05 17:55:04,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39347,1685987703028-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-05 17:55:04,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39347,1685987703028-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-05 17:55:04,760 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-05 17:55:04,808 DEBUG [Listener at localhost.localdomain/38071] zookeeper.ReadOnlyZKClient(139): Connect 0x1f943fba to 127.0.0.1:62057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:55:04,812 DEBUG [Listener at localhost.localdomain/38071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@428a0770, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:55:04,814 DEBUG [hconnection-0xda1bce2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:55:04,816 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:55:04,818 INFO [Listener at localhost.localdomain/38071] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:04,818 INFO [Listener at localhost.localdomain/38071] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:04,829 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-05 17:55:04,829 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:04,830 INFO [Listener at localhost.localdomain/38071] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-05 17:55:04,830 INFO [Listener at localhost.localdomain/38071] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-05 17:55:04,830 INFO [Listener at localhost.localdomain/38071] wal.TestLogRolling(432): Replication=2 2023-06-05 17:55:04,833 DEBUG [Listener at localhost.localdomain/38071] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-05 17:55:04,837 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-05 17:55:04,838 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-05 17:55:04,839 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-05 17:55:04,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:04,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-05 17:55:04,843 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:55:04,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-05 17:55:04,845 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:55:04,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:55:04,847 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:04,847 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3 empty. 2023-06-05 17:55:04,848 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:04,848 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-05 17:55:04,862 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:04,863 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 538d0e4b8418d489a83d6c683fd46ff3, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/.tmp 2023-06-05 17:55:04,876 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:04,876 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 538d0e4b8418d489a83d6c683fd46ff3, disabling compactions & flushes 2023-06-05 17:55:04,876 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:04,876 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:04,876 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. after waiting 0 ms 2023-06-05 17:55:04,877 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:04,877 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 
2023-06-05 17:55:04,877 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 538d0e4b8418d489a83d6c683fd46ff3: 2023-06-05 17:55:04,879 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:55:04,880 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685987704880"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987704880"}]},"ts":"1685987704880"} 2023-06-05 17:55:04,882 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-05 17:55:04,883 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:55:04,883 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987704883"}]},"ts":"1685987704883"} 2023-06-05 17:55:04,885 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-05 17:55:04,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=538d0e4b8418d489a83d6c683fd46ff3, ASSIGN}] 2023-06-05 17:55:04,890 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took 
xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=538d0e4b8418d489a83d6c683fd46ff3, ASSIGN 2023-06-05 17:55:04,891 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=538d0e4b8418d489a83d6c683fd46ff3, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42693,1685987703074; forceNewPlan=false, retain=false 2023-06-05 17:55:05,043 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=538d0e4b8418d489a83d6c683fd46ff3, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:05,044 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685987705043"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987705043"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987705043"}]},"ts":"1685987705043"} 2023-06-05 17:55:05,049 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 538d0e4b8418d489a83d6c683fd46ff3, server=jenkins-hbase20.apache.org,42693,1685987703074}] 2023-06-05 17:55:05,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 
2023-06-05 17:55:05,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 538d0e4b8418d489a83d6c683fd46ff3, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:55:05,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:05,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,211 INFO [StoreOpener-538d0e4b8418d489a83d6c683fd46ff3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,213 DEBUG [StoreOpener-538d0e4b8418d489a83d6c683fd46ff3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3/info 2023-06-05 17:55:05,213 DEBUG [StoreOpener-538d0e4b8418d489a83d6c683fd46ff3-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3/info 2023-06-05 17:55:05,214 INFO [StoreOpener-538d0e4b8418d489a83d6c683fd46ff3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 538d0e4b8418d489a83d6c683fd46ff3 columnFamilyName info 2023-06-05 17:55:05,215 INFO [StoreOpener-538d0e4b8418d489a83d6c683fd46ff3-1] regionserver.HStore(310): Store=538d0e4b8418d489a83d6c683fd46ff3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:05,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 
17:55:05,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:05,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/data/default/TestLogRolling-testLogRollOnPipelineRestart/538d0e4b8418d489a83d6c683fd46ff3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:05,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 538d0e4b8418d489a83d6c683fd46ff3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=734708, jitterRate=-0.06577078998088837}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:55:05,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 538d0e4b8418d489a83d6c683fd46ff3: 2023-06-05 17:55:05,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3., pid=11, masterSystemTime=1685987705203 2023-06-05 17:55:05,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:05,228 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 
2023-06-05 17:55:05,229 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=538d0e4b8418d489a83d6c683fd46ff3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:05,229 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685987705229"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987705229"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987705229"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987705229"}]},"ts":"1685987705229"} 2023-06-05 17:55:05,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-05 17:55:05,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 538d0e4b8418d489a83d6c683fd46ff3, server=jenkins-hbase20.apache.org,42693,1685987703074 in 183 msec 2023-06-05 17:55:05,239 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-05 17:55:05,239 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=538d0e4b8418d489a83d6c683fd46ff3, ASSIGN in 348 msec 2023-06-05 17:55:05,240 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:55:05,241 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987705241"}]},"ts":"1685987705241"} 2023-06-05 17:55:05,243 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-05 17:55:05,245 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:55:05,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 406 msec 2023-06-05 17:55:06,962 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-05 17:55:09,875 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-05 17:55:14,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:55:14,847 INFO [Listener at localhost.localdomain/38071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-06-05 17:55:14,851 DEBUG [Listener at localhost.localdomain/38071] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-06-05 17:55:14,852 DEBUG [Listener at localhost.localdomain/38071] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 
2023-06-05 17:55:16,861 INFO [Listener at localhost.localdomain/38071] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 2023-06-05 17:55:16,862 WARN [Listener at localhost.localdomain/38071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:55:16,864 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:16,865 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-05 17:55:16,864 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:16,865 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987703647 block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK], DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]) is bad. 2023-06-05 17:55:16,865 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]) is bad. 
2023-06-05 17:55:16,865 WARN [PacketResponder: BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45237]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,866 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987704210.meta block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]]: datanode 
0(DatanodeInfoWithStorage[127.0.0.1:45237,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]) is bad. 2023-06-05 17:55:16,866 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_63746643_17 at /127.0.0.1:52868 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52868 dst: /127.0.0.1:43909 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,870 INFO [Listener at localhost.localdomain/38071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:55:16,872 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:52904 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52904 dst: /127.0.0.1:43909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43909 remote=/127.0.0.1:52904]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,873 WARN [PacketResponder: BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43909]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,874 WARN [PacketResponder: BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43909]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,873 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:52914 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52914 dst: /127.0.0.1:43909 java.io.InterruptedIOException: Interrupted 
while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43909 remote=/127.0.0.1:52914]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,877 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:42160 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42160 dst: 
/127.0.0.1:45237 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,880 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:42164 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42164 dst: 
/127.0.0.1:45237 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,879 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_63746643_17 at /127.0.0.1:42132 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45237:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42132 dst: 
/127.0.0.1:45237 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:16,881 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:55:16,881 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to 
localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid f86d8251-b914-47c8-b35d-f50ac8b70d7f) service to localhost.localdomain/127.0.0.1:44149 2023-06-05 17:55:16,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data3/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:16,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data4/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:16,903 WARN [Listener at localhost.localdomain/38071] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:16,905 WARN [Listener at localhost.localdomain/38071] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:16,906 INFO [Listener at localhost.localdomain/38071] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:16,911 INFO [Listener at localhost.localdomain/38071] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_37751_datanode____.biaesq/webapp 2023-06-05 17:55:16,984 INFO 
[Listener at localhost.localdomain/38071] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37751 2023-06-05 17:55:16,990 WARN [Listener at localhost.localdomain/36659] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:16,995 WARN [Listener at localhost.localdomain/36659] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:55:16,995 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:16,995 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:16,995 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:16,999 INFO [Listener at localhost.localdomain/36659] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:55:17,042 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ebf63d4972d2fc5: Processing first storage report for DS-2c4df25d-980e-464d-adb8-91c953cf91d6 from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:17,042 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ebf63d4972d2fc5: from storage DS-2c4df25d-980e-464d-adb8-91c953cf91d6 node DatanodeRegistration(127.0.0.1:35095, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=33939, infoSecurePort=0, ipcPort=36659, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:17,043 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ebf63d4972d2fc5: Processing first storage report for DS-dc110e91-e793-428a-95dc-325d8550ee9b from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:17,043 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ebf63d4972d2fc5: from storage DS-dc110e91-e793-428a-95dc-325d8550ee9b node DatanodeRegistration(127.0.0.1:35095, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=33939, infoSecurePort=0, ipcPort=36659, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:17,104 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at 
/127.0.0.1:38386 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38386 dst: /127.0.0.1:43909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:17,108 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at 
/127.0.0.1:38372 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38372 dst: /127.0.0.1:43909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:17,107 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_63746643_17 at 
/127.0.0.1:38356 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43909:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38356 dst: /127.0.0.1:43909 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:17,111 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to 
localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:55:17,111 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid 06eb08d6-de3f-4d7c-942b-f8394d321ea0) service to localhost.localdomain/127.0.0.1:44149 2023-06-05 17:55:17,111 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:17,112 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data2/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:17,120 WARN [Listener at localhost.localdomain/36659] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:17,123 WARN [Listener at localhost.localdomain/36659] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:17,125 INFO [Listener at localhost.localdomain/36659] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:17,131 INFO [Listener at localhost.localdomain/36659] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_40327_datanode____.xs75ul/webapp 2023-06-05 17:55:17,202 INFO [Listener at localhost.localdomain/36659] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40327 2023-06-05 17:55:17,210 WARN [Listener at localhost.localdomain/37411] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:17,260 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe4513b5cd8f631f9: Processing first storage report for DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 from datanode 06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:17,260 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe4513b5cd8f631f9: from storage DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 node DatanodeRegistration(127.0.0.1:38717, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=40009, infoSecurePort=0, ipcPort=37411, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:17,260 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe4513b5cd8f631f9: Processing first storage report for DS-825459d5-0650-4d60-b8e2-228314b4b274 from datanode 06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:17,261 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe4513b5cd8f631f9: from storage DS-825459d5-0650-4d60-b8e2-228314b4b274 node DatanodeRegistration(127.0.0.1:38717, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=40009, infoSecurePort=0, ipcPort=37411, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 
2023-06-05 17:55:18,214 INFO [Listener at localhost.localdomain/37411] wal.TestLogRolling(481): Data Nodes restarted 2023-06-05 17:55:18,215 INFO [Listener at localhost.localdomain/37411] wal.AbstractTestLogRolling(233): Validated row row1002 2023-06-05 17:55:18,216 WARN [RS:0;jenkins-hbase20:42693.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:18,217 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C42693%2C1685987703074:(num 1685987704014) roll requested 2023-06-05 17:55:18,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:18,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42693] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47010 deadline: 1685987728216, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-05 17:55:18,228 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 newFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 2023-06-05 17:55:18,228 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-05 17:55:18,229 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 
2023-06-05 17:55:18,229 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38717,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK], DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] 2023-06-05 17:55:18,229 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:18,229 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 is not closed yet, will try archiving it next time 2023-06-05 17:55:18,229 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:30,320 INFO [Listener at localhost.localdomain/37411] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-05 17:55:32,323 WARN [Listener at localhost.localdomain/37411] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:55:32,324 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:32,325 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38717,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK], DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:38717,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]) is bad. 
2023-06-05 17:55:32,330 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:42442 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:35095:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42442 dst: /127.0.0.1:35095 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35095 remote=/127.0.0.1:42442]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:32,331 WARN [PacketResponder: BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35095]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:32,331 INFO [Listener at localhost.localdomain/37411] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:55:32,332 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:51018 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38717:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51018 dst: /127.0.0.1:38717 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:32,439 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:55:32,439 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid 
06eb08d6-de3f-4d7c-942b-f8394d321ea0) service to localhost.localdomain/127.0.0.1:44149 2023-06-05 17:55:32,441 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:32,441 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data2/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:32,449 WARN [Listener at localhost.localdomain/37411] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:32,452 WARN [Listener at localhost.localdomain/37411] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:32,453 INFO [Listener at localhost.localdomain/37411] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:32,458 INFO [Listener at localhost.localdomain/37411] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_33069_datanode____dnibqy/webapp 2023-06-05 17:55:32,530 INFO [Listener at localhost.localdomain/37411] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33069 2023-06-05 17:55:32,538 WARN [Listener at 
localhost.localdomain/45367] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:32,541 WARN [Listener at localhost.localdomain/45367] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:55:32,541 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-05 17:55:32,545 INFO [Listener at localhost.localdomain/45367] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:55:32,604 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87885ae8d58cf78d: Processing first storage report for DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 from datanode 06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:32,604 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87885ae8d58cf78d: from storage DS-f91502fb-6ebb-4d50-8083-e223d241b5c8 node DatanodeRegistration(127.0.0.1:34105, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=39687, infoSecurePort=0, ipcPort=45367, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-05 17:55:32,604 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87885ae8d58cf78d: Processing first storage report for DS-825459d5-0650-4d60-b8e2-228314b4b274 from datanode 
06eb08d6-de3f-4d7c-942b-f8394d321ea0 2023-06-05 17:55:32,604 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87885ae8d58cf78d: from storage DS-825459d5-0650-4d60-b8e2-228314b4b274 node DatanodeRegistration(127.0.0.1:34105, datanodeUuid=06eb08d6-de3f-4d7c-942b-f8394d321ea0, infoPort=39687, infoSecurePort=0, ipcPort=45367, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:32,648 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_108235545_17 at /127.0.0.1:46680 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:35095:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46680 dst: /127.0.0.1:35095 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-05 17:55:32,649 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:55:32,649 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid f86d8251-b914-47c8-b35d-f50ac8b70d7f) service to localhost.localdomain/127.0.0.1:44149 2023-06-05 17:55:32,650 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data3/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:32,650 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data4/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:55:32,656 WARN [Listener at 
localhost.localdomain/45367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:32,658 WARN [Listener at localhost.localdomain/45367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:32,660 INFO [Listener at localhost.localdomain/45367] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:32,668 INFO [Listener at localhost.localdomain/45367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/java.io.tmpdir/Jetty_localhost_36805_datanode____.qrn2u7/webapp 2023-06-05 17:55:32,751 INFO [Listener at localhost.localdomain/45367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36805 2023-06-05 17:55:32,761 WARN [Listener at localhost.localdomain/46325] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:32,875 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a93009afbcc6721: Processing first storage report for DS-2c4df25d-980e-464d-adb8-91c953cf91d6 from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:32,875 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a93009afbcc6721: from storage DS-2c4df25d-980e-464d-adb8-91c953cf91d6 node DatanodeRegistration(127.0.0.1:36391, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=41029, infoSecurePort=0, ipcPort=46325, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:32,875 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a93009afbcc6721: Processing first storage report for DS-dc110e91-e793-428a-95dc-325d8550ee9b from datanode f86d8251-b914-47c8-b35d-f50ac8b70d7f 2023-06-05 17:55:32,875 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a93009afbcc6721: from storage DS-dc110e91-e793-428a-95dc-325d8550ee9b node DatanodeRegistration(127.0.0.1:36391, datanodeUuid=f86d8251-b914-47c8-b35d-f50ac8b70d7f, infoPort=41029, infoSecurePort=0, ipcPort=46325, storageInfo=lv=-57;cid=testClusterID;nsid=27364690;c=1685987702398), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:33,766 INFO [Listener at localhost.localdomain/46325] wal.TestLogRolling(498): Data Nodes restarted 2023-06-05 17:55:33,768 INFO [Listener at localhost.localdomain/46325] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-05 17:55:33,769 WARN [RS:0;jenkins-hbase20:42693.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] are bad. Aborting... 
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,771 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C42693%2C1685987703074:(num 1685987718217) roll requested
2023-06-05 17:55:33,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42693] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47010 deadline: 1685987743768, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
2023-06-05 17:55:33,776 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,776 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C39347%2C1685987703028:(num 1685987703647) roll requested
2023-06-05 17:55:33,776 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,777 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
	at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423)
	at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135)
	at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122)
	at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101)
	at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68)
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,783 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 newFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771
2023-06-05 17:55:33,783 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
2023-06-05 17:55:33,783 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771
2023-06-05 17:55:33,784 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,784 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:34105,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]]
2023-06-05 17:55:33,784 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 is not closed yet, will try archiving it next time
2023-06-05 17:55:33,786 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35095,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,787 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
2023-06-05 17:55:33,787 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987703647 with entries=88, filesize=43.82 KB; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987733776
2023-06-05 17:55:33,788 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34105,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK], DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]]
2023-06-05 17:55:33,788 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987703647 is not closed yet, will try archiving it next time
2023-06-05 17:55:33,788 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:33,788 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028/jenkins-hbase20.apache.org%2C39347%2C1685987703028.1685987703647; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:45,869 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771 newFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858
2023-06-05 17:55:45,870 INFO [Listener at localhost.localdomain/46325] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858
2023-06-05 17:55:45,877 DEBUG [Listener at localhost.localdomain/46325] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:34105,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]]
2023-06-05 17:55:45,877 DEBUG [Listener at localhost.localdomain/46325] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771 is not closed yet, will try archiving it next time
2023-06-05 17:55:45,877 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014
2023-06-05 17:55:45,878 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014
2023-06-05 17:55:45,882 WARN [IPC Server handler 1 on default port 44149] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1015
2023-06-05 17:55:45,884 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 after 5ms
2023-06-05 17:55:46,900 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1962c186] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-2107108549-148.251.75.209-1685987702398:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:36391,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR
  getNumBytes() = 2162
  getBytesOnDisk() = 2162
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data4/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data4/current/BP-2107108549-148.251.75.209-1685987702398/current/rbw/blk_1073741832
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:49,885 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014 after 4006ms
2023-06-05 17:55:49,885 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987704014
2023-06-05 17:55:49,894 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685987704675/Put/vlen=176/seqid=0]
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #4: [default/info:d/1685987704714/Put/vlen=9/seqid=0]
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #5: [hbase/info:d/1685987704737/Put/vlen=7/seqid=0]
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685987705224/Put/vlen=232/seqid=0]
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #4: [row1002/info:/1685987714858/Put/vlen=1045/seqid=0]
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.ProtobufLogReader(420): EOF at position 2162
2023-06-05 17:55:49,895 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217
2023-06-05 17:55:49,895 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217
2023-06-05 17:55:49,896 WARN [IPC Server handler 4 on default port 44149] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018
2023-06-05 17:55:49,896 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 after 1ms
2023-06-05 17:55:50,878 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@6fe62170] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-2107108549-148.251.75.209-1685987702398:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:34105,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes() = 2425
  getBytesOnDisk() = 2425
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current/BP-2107108549-148.251.75.209-1685987702398/current/rbw/blk_1073741838
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
	at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes() = 2425
  getBytesOnDisk() = 2425
  getVisibleLength()= -1
  getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current
  getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current/BP-2107108549-148.251.75.209-1685987702398/current/rbw/blk_1073741838
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
	at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83)
	at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
	... 4 more
2023-06-05 17:55:53,897 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217 after 4002ms
2023-06-05 17:55:53,898 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987718217
2023-06-05 17:55:53,904 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #6: [row1003/info:/1685987728316/Put/vlen=1045/seqid=0]
2023-06-05 17:55:53,904 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #7: [row1004/info:/1685987730321/Put/vlen=1045/seqid=0]
2023-06-05 17:55:53,904 DEBUG [Listener at localhost.localdomain/46325] wal.ProtobufLogReader(420): EOF at position 2425
2023-06-05 17:55:53,904 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771
2023-06-05 17:55:53,904 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771
2023-06-05 17:55:53,905 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771 after 1ms
2023-06-05 17:55:53,905 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987733771
2023-06-05 17:55:53,912 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(522): #9: [row1005/info:/1685987743855/Put/vlen=1045/seqid=0]
2023-06-05 17:55:53,912 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858
2023-06-05 17:55:53,912 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858
2023-06-05 17:55:53,913 WARN [IPC Server handler 3 on default port 44149] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021
2023-06-05 17:55:53,913 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 after 1ms
2023-06-05 17:55:54,877 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_63746643_17 at /127.0.0.1:34896 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:36391:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34896 dst: /127.0.0.1:36391
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36391 remote=/127.0.0.1:34896]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:54,878 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_63746643_17 at /127.0.0.1:55946 [Receiving block BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:34105:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55946 dst: /127.0.0.1:34105
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:54,878 WARN [ResponseProcessor for block BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-05 17:55:54,878 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 block BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:34105,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK]) is bad.
2023-06-05 17:55:54,886 WARN [DataStreamer for file /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 block BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,914 INFO [Listener at localhost.localdomain/46325] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 after 
4002ms 2023-06-05 17:55:57,914 DEBUG [Listener at localhost.localdomain/46325] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 2023-06-05 17:55:57,919 DEBUG [Listener at localhost.localdomain/46325] wal.ProtobufLogReader(420): EOF at position 83 2023-06-05 17:55:57,921 INFO [Listener at localhost.localdomain/46325] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-06-05 17:55:57,921 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,921 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta:.meta(num 1685987704210) roll requested 2023-06-05 17:55:57,921 DEBUG [Listener at localhost.localdomain/46325] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-05 17:55:57,921 INFO [Listener at localhost.localdomain/46325] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at 
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,922 INFO [Listener at localhost.localdomain/46325] regionserver.HRegion(2745): Flushing 215aae13234348d9e24b41b6b6aaf76f 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-05 17:55:57,924 WARN [RS:0;jenkins-hbase20:42693.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at 
com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,924 DEBUG [Listener at localhost.localdomain/46325] regionserver.HRegion(2446): Flush status journal for 215aae13234348d9e24b41b6b6aaf76f: 2023-06-05 17:55:57,924 INFO [Listener at localhost.localdomain/46325] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,928 INFO [Listener at localhost.localdomain/46325] regionserver.HRegion(2745): Flushing 538d0e4b8418d489a83d6c683fd46ff3 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-05 17:55:57,929 DEBUG [Listener at localhost.localdomain/46325] regionserver.HRegion(2446): Flush status journal for 538d0e4b8418d489a83d6c683fd46ff3: 2023-06-05 17:55:57,929 INFO [Listener at localhost.localdomain/46325] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is 
UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,936 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-05 17:55:57,937 INFO [Listener at localhost.localdomain/46325] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-06-05 17:55:57,937 DEBUG [Listener at localhost.localdomain/46325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f943fba to 127.0.0.1:62057 2023-06-05 17:55:57,937 DEBUG [Listener at localhost.localdomain/46325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:57,937 DEBUG [Listener at localhost.localdomain/46325] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-05 17:55:57,937 DEBUG [Listener at localhost.localdomain/46325] util.JVMClusterUtil(257): Found active master hash=1176013050, stopped=false 2023-06-05 17:55:57,937 INFO [Listener at localhost.localdomain/46325] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39347,1685987703028 2023-06-05 17:55:57,938 WARN 
[regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-05 17:55:57,938 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987704210.meta with entries=11, filesize=3.72 KB; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987757921.meta 2023-06-05 17:55:57,938 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36391,DS-2c4df25d-980e-464d-adb8-91c953cf91d6,DISK], DatanodeInfoWithStorage[127.0.0.1:34105,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] 2023-06-05 17:55:57,938 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987704210.meta is not closed yet, will try archiving it next time 2023-06-05 17:55:57,939 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,939 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C42693%2C1685987703074:(num 1685987745858) roll requested 2023-06-05 17:55:57,939 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:57,939 INFO [Listener at localhost.localdomain/46325] procedure2.ProcedureExecutor(629): Stopping 2023-06-05 17:55:57,939 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:57,939 DEBUG [Listener at localhost.localdomain/46325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x45958e40 to 127.0.0.1:62057 2023-06-05 17:55:57,939 DEBUG [Listener at localhost.localdomain/46325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:57,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:57,939 INFO [Listener at localhost.localdomain/46325] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,42693,1685987703074' ***** 2023-06-05 17:55:57,939 INFO [Listener at localhost.localdomain/46325] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 
2023-06-05 17:55:57,939 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:57,940 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.meta.1685987704210.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:43909,DS-f91502fb-6ebb-4d50-8083-e223d241b5c8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,942 INFO [RS:0;jenkins-hbase20:42693] regionserver.HeapMemoryManager(220): Stopping 2023-06-05 17:55:57,942 INFO [RS:0;jenkins-hbase20:42693] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-05 17:55:57,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:57,943 INFO [RS:0;jenkins-hbase20:42693] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-05 17:55:57,942 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-05 17:55:57,943 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 215aae13234348d9e24b41b6b6aaf76f 2023-06-05 17:55:57,943 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 17:55:57,943 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,42693,1685987703074 2023-06-05 17:55:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 215aae13234348d9e24b41b6b6aaf76f, disabling compactions & flushes 2023-06-05 17:55:57,944 DEBUG [RS:0;jenkins-hbase20:42693] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b8938a8 to 127.0.0.1:62057 2023-06-05 17:55:57,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:57,944 DEBUG [RS:0;jenkins-hbase20:42693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-05 17:55:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 2023-06-05 17:55:57,944 INFO [RS:0;jenkins-hbase20:42693] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-05 17:55:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. after waiting 0 ms 2023-06-05 17:55:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 
2023-06-05 17:55:57,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 215aae13234348d9e24b41b6b6aaf76f 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-05 17:55:57,944 INFO [RS:0;jenkins-hbase20:42693] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-05 17:55:57,944 INFO [RS:0;jenkins-hbase20:42693] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-05 17:55:57,944 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-05 17:55:57,944 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-06-05 17:55:57,944 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-05 17:55:57,945 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 215aae13234348d9e24b41b6b6aaf76f=hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f., 538d0e4b8418d489a83d6c683fd46ff3=TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.} 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 215aae13234348d9e24b41b6b6aaf76f: 2023-06-05 17:55:57,945 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:55:57,945 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1504): Waiting on 1588230740, 215aae13234348d9e24b41b6b6aaf76f, 538d0e4b8418d489a83d6c683fd46ff3 2023-06-05 
17:55:57,945 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,42693,1685987703074: Unrecoverable exception while closing hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 
17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:55:57,945 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:55:57,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-05 17:55:57,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-05 17:55:57,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-05 17:55:57,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-05 17:55:57,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1093664768, "init": 524288000, "max": 2051014656, "used": 338345504 }, "NonHeapMemoryUsage": { "committed": 139026432, "init": 2555904, "max": -1, "used": 136500200 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-05 17:55:57,947 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39347] master.MasterRpcServices(609): jenkins-hbase20.apache.org,42693,1685987703074 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,42693,1685987703074: Unrecoverable exception while closing hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) 
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 538d0e4b8418d489a83d6c683fd46ff3, disabling compactions & flushes 2023-06-05 17:55:57,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): 
Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:57,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:57,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. after waiting 0 ms 2023-06-05 17:55:57,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 2023-06-05 17:55:57,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 538d0e4b8418d489a83d6c683fd46ff3: 2023-06-05 17:55:57,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. 
2023-06-05 17:55:57,956 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 newFile=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987757939 2023-06-05 17:55:57,956 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-05 17:55:57,956 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987757939 2023-06-05 17:55:57,956 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-05 17:55:57,956 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858 failed. 
Cause="Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-05 17:55:57,956 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:57,957 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074/jenkins-hbase20.apache.org%2C42693%2C1685987703074.1685987745858, unflushedEntries=0
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884)
	at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304)
	at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-2107108549-148.251.75.209-1685987702398:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
	at org.apache.hadoop.ipc.Client.call(Client.java:1486)
	at org.apache.hadoop.ipc.Client.call(Client.java:1385)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
	at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-05 17:55:57,959 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074
2023-06-05 17:55:57,960 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.nio.channels.ClosedChannelException
	at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324)
	at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151)
	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-06-05 17:55:57,962 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/WALs/jenkins-hbase20.apache.org,42693,1685987703074
2023-06-05 17:55:57,967 DEBUG [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller
2023-06-05 17:55:58,145 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-05 17:55:58,145 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 215aae13234348d9e24b41b6b6aaf76f
2023-06-05 17:55:58,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:55:58,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 215aae13234348d9e24b41b6b6aaf76f, disabling compactions & flushes
2023-06-05 17:55:58,145 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:55:58,145 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(3303): Received CLOSE for 538d0e4b8418d489a83d6c683fd46ff3
2023-06-05 17:55:58,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:55:58,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.
2023-06-05 17:55:58,146 DEBUG [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1504): Waiting on 1588230740, 215aae13234348d9e24b41b6b6aaf76f, 538d0e4b8418d489a83d6c683fd46ff3
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f. after waiting 0 ms
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 215aae13234348d9e24b41b6b6aaf76f:
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685987704297.215aae13234348d9e24b41b6b6aaf76f.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 538d0e4b8418d489a83d6c683fd46ff3, disabling compactions & flushes
2023-06-05 17:55:58,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3. after waiting 0 ms
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 538d0e4b8418d489a83d6c683fd46ff3:
2023-06-05 17:55:58,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685987704838.538d0e4b8418d489a83d6c683fd46ff3.
2023-06-05 17:55:58,346 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing
2023-06-05 17:55:58,346 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,42693,1685987703074; all regions closed.
2023-06-05 17:55:58,346 DEBUG [RS:0;jenkins-hbase20:42693] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:55:58,346 INFO [RS:0;jenkins-hbase20:42693] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:55:58,347 INFO [RS:0;jenkins-hbase20:42693] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-06-05 17:55:58,347 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:55:58,348 INFO [RS:0;jenkins-hbase20:42693] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:42693
2023-06-05 17:55:58,350 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42693,1685987703074
2023-06-05 17:55:58,350 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:55:58,350 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:55:58,350 ERROR [Listener at localhost.localdomain/38071-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@547da667 rejected from java.util.concurrent.ThreadPoolExecutor@50423830[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
	at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
	at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2023-06-05 17:55:58,351 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,42693,1685987703074]
2023-06-05 17:55:58,351 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,42693,1685987703074; numProcessing=1
2023-06-05 17:55:58,352 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,42693,1685987703074 already deleted, retry=false
2023-06-05 17:55:58,352 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,42693,1685987703074 expired; onlineServers=0
2023-06-05 17:55:58,352 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39347,1685987703028' *****
2023-06-05 17:55:58,352 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-05 17:55:58,352 DEBUG [M:0;jenkins-hbase20:39347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@422714e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-06-05 17:55:58,352 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39347,1685987703028
2023-06-05 17:55:58,352 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39347,1685987703028; all regions closed.
2023-06-05 17:55:58,352 DEBUG [M:0;jenkins-hbase20:39347] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:55:58,352 DEBUG [M:0;jenkins-hbase20:39347] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-05 17:55:58,353 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-05 17:55:58,353 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987703787] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987703787,5,FailOnTimeoutGroup]
2023-06-05 17:55:58,353 DEBUG [M:0;jenkins-hbase20:39347] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-05 17:55:58,354 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-05 17:55:58,353 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987703787] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987703787,5,FailOnTimeoutGroup]
2023-06-05 17:55:58,354 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:55:58,354 INFO [M:0;jenkins-hbase20:39347] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-05 17:55:58,354 INFO [M:0;jenkins-hbase20:39347] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-05 17:55:58,354 INFO [M:0;jenkins-hbase20:39347] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-06-05 17:55:58,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:55:58,354 DEBUG [M:0;jenkins-hbase20:39347] master.HMaster(1512): Stopping service threads
2023-06-05 17:55:58,355 INFO [M:0;jenkins-hbase20:39347] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-05 17:55:58,355 ERROR [M:0;jenkins-hbase20:39347] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-05 17:55:58,355 INFO [M:0;jenkins-hbase20:39347] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-05 17:55:58,355 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-05 17:55:58,355 DEBUG [M:0;jenkins-hbase20:39347] zookeeper.ZKUtil(398): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-05 17:55:58,356 WARN [M:0;jenkins-hbase20:39347] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-05 17:55:58,356 INFO [M:0;jenkins-hbase20:39347] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-05 17:55:58,356 INFO [M:0;jenkins-hbase20:39347] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-05 17:55:58,356 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:55:58,356 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:58,356 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:58,356 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:55:58,356 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:58,357 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.20 KB heapSize=45.83 KB
2023-06-05 17:55:58,371 INFO [M:0;jenkins-hbase20:39347] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.20 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/80b4e5dd189e43d0b5fb0124c48e7956
2023-06-05 17:55:58,379 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/80b4e5dd189e43d0b5fb0124c48e7956 as hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/80b4e5dd189e43d0b5fb0124c48e7956
2023-06-05 17:55:58,386 INFO [M:0;jenkins-hbase20:39347] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44149/user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/80b4e5dd189e43d0b5fb0124c48e7956, entries=11, sequenceid=92, filesize=7.0 K
2023-06-05 17:55:58,387 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegion(2948): Finished flush of dataSize ~38.20 KB/39113, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=92, compaction requested=false
2023-06-05 17:55:58,388 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:55:58,388 DEBUG [M:0;jenkins-hbase20:39347] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:55:58,389 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a61dcdc1-74a6-046f-0431-b6c0a5064f66/MasterData/WALs/jenkins-hbase20.apache.org,39347,1685987703028
2023-06-05 17:55:58,393 INFO [M:0;jenkins-hbase20:39347] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-05 17:55:58,393 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:55:58,393 INFO [M:0;jenkins-hbase20:39347] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39347
2023-06-05 17:55:58,396 DEBUG [M:0;jenkins-hbase20:39347] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39347,1685987703028 already deleted, retry=false
2023-06-05 17:55:58,451 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:58,451 INFO [RS:0;jenkins-hbase20:42693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,42693,1685987703074; zookeeper connection closed.
2023-06-05 17:55:58,451 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): regionserver:42693-0x101bc68d0f20001, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:58,452 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@47fcd702] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@47fcd702
2023-06-05 17:55:58,456 INFO [Listener at localhost.localdomain/46325] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-05 17:55:58,551 INFO [M:0;jenkins-hbase20:39347] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39347,1685987703028; zookeeper connection closed.
2023-06-05 17:55:58,551 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:58,552 DEBUG [Listener at localhost.localdomain/38071-EventThread] zookeeper.ZKWatcher(600): master:39347-0x101bc68d0f20000, quorum=127.0.0.1:62057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:55:58,552 WARN [Listener at localhost.localdomain/46325] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:55:58,568 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:55:58,674 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:55:58,674 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid f86d8251-b914-47c8-b35d-f50ac8b70d7f) service to localhost.localdomain/127.0.0.1:44149
2023-06-05 17:55:58,675 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data3/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:58,675 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data4/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:58,679 WARN [Listener at localhost.localdomain/46325] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:55:58,719 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:55:58,719 WARN [158704153@qtp-5562202-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33069] http.HttpServer2$SelectChannelConnectorWithSafeStartup(546): HttpServer Acceptor: isRunning is false. Rechecking.
2023-06-05 17:55:58,721 WARN [158704153@qtp-5562202-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33069] http.HttpServer2$SelectChannelConnectorWithSafeStartup(555): HttpServer Acceptor: isRunning is false
2023-06-05 17:55:58,824 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:55:58,824 WARN [BP-2107108549-148.251.75.209-1685987702398 heartbeating to localhost.localdomain/127.0.0.1:44149] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2107108549-148.251.75.209-1685987702398 (Datanode Uuid 06eb08d6-de3f-4d7c-942b-f8394d321ea0) service to localhost.localdomain/127.0.0.1:44149
2023-06-05 17:55:58,825 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data1/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:58,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/cluster_abce875d-ad65-ad0a-f24c-ab0c5aece5bd/dfs/data/data2/current/BP-2107108549-148.251.75.209-1685987702398] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:55:58,843 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-05 17:55:58,959 INFO [Listener at localhost.localdomain/46325] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-05 17:55:58,983 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-05 17:55:58,994 INFO [Listener at localhost.localdomain/46325] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 77)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44149 from jenkins.hfs.3
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44149 from jenkins
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-29-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-29-2
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-1
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/46325
	java.lang.Thread.dumpThreads(Native Method)
	java.lang.Thread.getAllStackTraces(Thread.java:1615)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
	org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
	org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
	org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
	org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
	org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
	org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1624275195) connection to localhost.localdomain/127.0.0.1:44149 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:44149 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:44149 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=463 (was 471), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=148 (was 127) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=6386 (was 7372) 2023-06-05 17:55:59,009 INFO [Listener at localhost.localdomain/46325] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=463, MaxFileDescriptor=60000, SystemLoadAverage=148, ProcessCount=169, AvailableMemoryMB=6383 2023-06-05 17:55:59,009 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/hadoop.log.dir so I do NOT create it in target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e0ae20c6-b91e-fc9f-e161-e3f574f2a9db/hadoop.tmp.dir so I do NOT create it in target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619, deleteOnExit=true 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/test.cache.data in system properties and HBase conf 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/hadoop.tmp.dir in system properties and HBase conf 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/hadoop.log.dir in system properties and HBase conf 2023-06-05 17:55:59,010 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener 
at localhost.localdomain/46325] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-05 17:55:59,011 DEBUG [Listener at localhost.localdomain/46325] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-05 17:55:59,011 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/nfs.dump.dir in system properties and HBase conf 2023-06-05 
17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/java.io.tmpdir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-05 17:55:59,012 INFO [Listener at localhost.localdomain/46325] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-05 17:55:59,014 WARN [Listener at localhost.localdomain/46325] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:55:59,016 WARN [Listener at localhost.localdomain/46325] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:55:59,016 WARN [Listener at localhost.localdomain/46325] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:55:59,050 WARN [Listener at localhost.localdomain/46325] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:59,053 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:59,063 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/java.io.tmpdir/Jetty_localhost_localdomain_41533_hdfs____dd6oia/webapp 2023-06-05 17:55:59,152 INFO [Listener at localhost.localdomain/46325] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41533 2023-06-05 17:55:59,154 WARN [Listener at localhost.localdomain/46325] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-05 17:55:59,157 WARN [Listener at localhost.localdomain/46325] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-05 17:55:59,158 WARN [Listener at localhost.localdomain/46325] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-05 17:55:59,254 WARN [Listener at localhost.localdomain/38221] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:59,299 WARN [Listener at localhost.localdomain/38221] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:59,306 WARN [Listener at localhost.localdomain/38221] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-05 17:55:59,308 INFO [Listener at localhost.localdomain/38221] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:59,313 INFO [Listener at localhost.localdomain/38221] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/java.io.tmpdir/Jetty_localhost_37569_datanode____xicj3v/webapp 2023-06-05 17:55:59,405 INFO [Listener at localhost.localdomain/38221] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37569 2023-06-05 17:55:59,422 WARN [Listener at localhost.localdomain/40243] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:59,483 WARN [Listener at localhost.localdomain/40243] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-05 17:55:59,490 WARN [Listener at localhost.localdomain/40243] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-06-05 17:55:59,492 INFO [Listener at localhost.localdomain/40243] log.Slf4jLog(67): jetty-6.1.26 2023-06-05 17:55:59,503 INFO [Listener at localhost.localdomain/40243] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/java.io.tmpdir/Jetty_localhost_37363_datanode____crc9ir/webapp 2023-06-05 17:55:59,559 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd43cff8b25e95611: Processing first storage report for DS-089525b5-2642-4891-923e-cbf430a2fb2c from datanode be7740f5-cd17-447a-9878-68cce00ab3aa 2023-06-05 17:55:59,559 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd43cff8b25e95611: from storage DS-089525b5-2642-4891-923e-cbf430a2fb2c node DatanodeRegistration(127.0.0.1:35765, datanodeUuid=be7740f5-cd17-447a-9878-68cce00ab3aa, infoPort=38921, infoSecurePort=0, ipcPort=40243, storageInfo=lv=-57;cid=testClusterID;nsid=1302444194;c=1685987759018), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:59,559 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd43cff8b25e95611: Processing first storage report for DS-44c60961-8cf8-43ab-a16c-8b690c0103ea from datanode be7740f5-cd17-447a-9878-68cce00ab3aa 2023-06-05 17:55:59,559 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd43cff8b25e95611: from storage DS-44c60961-8cf8-43ab-a16c-8b690c0103ea node DatanodeRegistration(127.0.0.1:35765, datanodeUuid=be7740f5-cd17-447a-9878-68cce00ab3aa, infoPort=38921, infoSecurePort=0, ipcPort=40243, 
storageInfo=lv=-57;cid=testClusterID;nsid=1302444194;c=1685987759018), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:59,613 INFO [Listener at localhost.localdomain/40243] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37363 2023-06-05 17:55:59,623 WARN [Listener at localhost.localdomain/45889] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-05 17:55:59,686 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2af83f43e698ec0c: Processing first storage report for DS-501a792a-f7ea-4733-a391-4a461ff891d3 from datanode 177fcb07-d942-4d3e-900d-b296ed159ac6 2023-06-05 17:55:59,686 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2af83f43e698ec0c: from storage DS-501a792a-f7ea-4733-a391-4a461ff891d3 node DatanodeRegistration(127.0.0.1:44455, datanodeUuid=177fcb07-d942-4d3e-900d-b296ed159ac6, infoPort=46811, infoSecurePort=0, ipcPort=45889, storageInfo=lv=-57;cid=testClusterID;nsid=1302444194;c=1685987759018), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:59,686 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2af83f43e698ec0c: Processing first storage report for DS-ad2bf741-24b6-422a-b9d9-c8368d4c6d18 from datanode 177fcb07-d942-4d3e-900d-b296ed159ac6 2023-06-05 17:55:59,686 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2af83f43e698ec0c: from storage DS-ad2bf741-24b6-422a-b9d9-c8368d4c6d18 node DatanodeRegistration(127.0.0.1:44455, datanodeUuid=177fcb07-d942-4d3e-900d-b296ed159ac6, infoPort=46811, infoSecurePort=0, ipcPort=45889, storageInfo=lv=-57;cid=testClusterID;nsid=1302444194;c=1685987759018), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-05 17:55:59,747 DEBUG [Listener at 
localhost.localdomain/45889] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9 2023-06-05 17:55:59,750 INFO [Listener at localhost.localdomain/45889] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/zookeeper_0, clientPort=49402, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-05 17:55:59,751 INFO [Listener at localhost.localdomain/45889] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49402 2023-06-05 17:55:59,752 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,753 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,771 INFO [Listener at localhost.localdomain/45889] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338 with version=8 2023-06-05 17:55:59,771 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,773 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:55:59,774 INFO [Listener at localhost.localdomain/45889] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-05 17:55:59,775 INFO [Listener at localhost.localdomain/45889] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39069 2023-06-05 17:55:59,775 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,776 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,777 INFO [Listener at localhost.localdomain/45889] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39069 connecting to ZooKeeper ensemble=127.0.0.1:49402 2023-06-05 17:55:59,782 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:390690x0, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:55:59,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39069-0x101bc69aeb10000 connected 2023-06-05 17:55:59,793 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:55:59,794 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:59,794 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:55:59,795 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39069 2023-06-05 17:55:59,797 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39069 2023-06-05 17:55:59,798 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39069 2023-06-05 17:55:59,798 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39069 2023-06-05 17:55:59,798 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39069 2023-06-05 17:55:59,798 INFO [Listener at localhost.localdomain/45889] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338, hbase.cluster.distributed=false 2023-06-05 17:55:59,811 INFO [Listener at localhost.localdomain/45889] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:55:59,811 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,812 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,812 INFO [Listener at localhost.localdomain/45889] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:55:59,812 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:55:59,812 INFO [Listener at localhost.localdomain/45889] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:55:59,812 INFO [Listener at localhost.localdomain/45889] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-05 17:55:59,813 INFO [Listener at localhost.localdomain/45889] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38709 2023-06-05 17:55:59,813 INFO [Listener at localhost.localdomain/45889] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-05 17:55:59,814 DEBUG [Listener at localhost.localdomain/45889] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-05 17:55:59,815 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,815 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,816 INFO [Listener at localhost.localdomain/45889] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38709 connecting to ZooKeeper ensemble=127.0.0.1:49402 2023-06-05 17:55:59,819 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:387090x0, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:55:59,820 DEBUG 
[Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): regionserver:387090x0, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:55:59,820 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38709-0x101bc69aeb10001 connected 2023-06-05 17:55:59,821 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:55:59,821 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:55:59,822 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38709 2023-06-05 17:55:59,822 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38709 2023-06-05 17:55:59,822 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38709 2023-06-05 17:55:59,822 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38709 2023-06-05 17:55:59,823 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38709 2023-06-05 17:55:59,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,824 DEBUG [Listener at localhost.localdomain/45889-EventThread] 
zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:55:59,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,826 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:55:59,826 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:55:59,826 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,826 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:55:59,827 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39069,1685987759772 from backup master directory 2023-06-05 17:55:59,827 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:55:59,829 DEBUG [Listener at 
localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,829 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:55:59,829 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:55:59,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,842 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/hbase.id with ID: 53dc7b23-2a93-4273-ae8b-1ff3e912af97 2023-06-05 17:55:59,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:55:59,857 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3d65ba0b to 127.0.0.1:49402 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:55:59,875 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73bb9347, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:55:59,875 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:59,876 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-05 17:55:59,876 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:55:59,878 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store-tmp 2023-06-05 17:55:59,886 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-05 17:55:59,886 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:59,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-05 17:55:59,886 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:59,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:59,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:55:59,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:55:59,886 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-05 17:55:59,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:55:59,887 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/WALs/jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,889 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39069%2C1685987759772, suffix=, logDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/WALs/jenkins-hbase20.apache.org,39069,1685987759772, archiveDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/oldWALs, maxLogs=10 2023-06-05 17:55:59,895 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/WALs/jenkins-hbase20.apache.org,39069,1685987759772/jenkins-hbase20.apache.org%2C39069%2C1685987759772.1685987759889 2023-06-05 17:55:59,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK], DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK]] 2023-06-05 17:55:59,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:55:59,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:59,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,898 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,899 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-05 17:55:59,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-05 17:55:59,900 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:59,901 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,901 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,904 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:55:59,906 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:59,907 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=717056, jitterRate=-0.08821682631969452}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:55:59,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:55:59,907 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-05 17:55:59,908 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-05 17:55:59,908 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-05 17:55:59,908 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-05 17:55:59,908 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-05 17:55:59,909 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-05 17:55:59,909 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-05 17:55:59,909 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-05 17:55:59,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-05 17:55:59,921 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:55:59,921 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:55:59,922 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:55:59,922 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:55:59,922 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:55:59,924 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,924 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:55:59,925 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:55:59,925 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:55:59,926 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:59,926 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:55:59,926 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,928 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39069,1685987759772, sessionid=0x101bc69aeb10000, setting cluster-up flag (Was=false) 2023-06-05 17:55:59,929 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,932 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:55:59,933 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,935 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:55:59,938 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:55:59,938 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:55:59,939 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.hbase-snapshot/.tmp 2023-06-05 17:55:59,944 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:59,945 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:55:59,945 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:55:59,947 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987789947 2023-06-05 17:55:59,947 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:55:59,947 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:55:59,948 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:55:59,948 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:55:59,948 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:55:59,948 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-05 17:55:59,948 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:59,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:55:59,949 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:59,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:55:59,949 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:55:59,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:55:59,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:55:59,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:55:59,949 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987759949,5,FailOnTimeoutGroup] 2023-06-05 17:55:59,950 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987759949,5,FailOnTimeoutGroup] 2023-06-05 17:55:59,950 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:59,950 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-05 17:55:59,950 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-05 17:55:59,950 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-05 17:55:59,950 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:55:59,960 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:59,961 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:55:59,961 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338 2023-06-05 17:55:59,967 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:55:59,969 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:55:59,970 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/info 2023-06-05 17:55:59,971 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:55:59,971 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:59,971 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:55:59,972 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:55:59,973 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 
17:55:59,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:59,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:55:59,975 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/table 2023-06-05 17:55:59,975 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:55:59,975 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:55:59,976 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740 2023-06-05 17:55:59,976 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740 2023-06-05 17:55:59,979 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-05 17:55:59,980 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:55:59,982 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:55:59,983 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=796912, jitterRate=0.013326093554496765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:55:59,983 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:55:59,983 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:55:59,983 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:55:59,983 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:55:59,983 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:55:59,983 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-06-05 17:55:59,984 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:55:59,984 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:55:59,985 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:55:59,985 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-05 17:55:59,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-05 17:55:59,987 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-05 17:55:59,989 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-05 17:56:00,025 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(951): ClusterId : 53dc7b23-2a93-4273-ae8b-1ff3e912af97 2023-06-05 17:56:00,026 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:56:00,028 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:56:00,028 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-05 17:56:00,030 DEBUG [RS:0;jenkins-hbase20:38709] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:56:00,031 DEBUG [RS:0;jenkins-hbase20:38709] zookeeper.ReadOnlyZKClient(139): Connect 0x3f7cf002 to 127.0.0.1:49402 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:56:00,042 DEBUG [RS:0;jenkins-hbase20:38709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31bb099d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:56:00,042 DEBUG [RS:0;jenkins-hbase20:38709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35849b72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:56:00,054 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:38709 2023-06-05 17:56:00,054 INFO [RS:0;jenkins-hbase20:38709] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:56:00,054 INFO [RS:0;jenkins-hbase20:38709] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:56:00,054 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-05 17:56:00,054 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,39069,1685987759772 with isa=jenkins-hbase20.apache.org/148.251.75.209:38709, startcode=1685987759811 2023-06-05 17:56:00,054 DEBUG [RS:0;jenkins-hbase20:38709] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-05 17:56:00,059 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49353, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-05 17:56:00,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,061 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338 2023-06-05 17:56:00,061 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38221 2023-06-05 17:56:00,061 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-05 17:56:00,062 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:56:00,063 DEBUG [RS:0;jenkins-hbase20:38709] zookeeper.ZKUtil(162): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,063 WARN [RS:0;jenkins-hbase20:38709] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes 
will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:56:00,063 INFO [RS:0;jenkins-hbase20:38709] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:56:00,063 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,063 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38709,1685987759811] 2023-06-05 17:56:00,066 DEBUG [RS:0;jenkins-hbase20:38709] zookeeper.ZKUtil(162): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,067 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-05 17:56:00,067 INFO [RS:0;jenkins-hbase20:38709] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-05 17:56:00,069 INFO [RS:0;jenkins-hbase20:38709] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-05 17:56:00,069 INFO [RS:0;jenkins-hbase20:38709] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-05 17:56:00,069 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:56:00,069 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-05 17:56:00,070 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,071 DEBUG [RS:0;jenkins-hbase20:38709] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,072 DEBUG [RS:0;jenkins-hbase20:38709] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:56:00,072 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,072 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,072 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,084 INFO [RS:0;jenkins-hbase20:38709] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-05 17:56:00,084 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38709,1685987759811-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:56:00,094 INFO [RS:0;jenkins-hbase20:38709] regionserver.Replication(203): jenkins-hbase20.apache.org,38709,1685987759811 started 2023-06-05 17:56:00,094 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38709,1685987759811, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38709, sessionid=0x101bc69aeb10001 2023-06-05 17:56:00,094 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-05 17:56:00,094 DEBUG [RS:0;jenkins-hbase20:38709] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,094 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38709,1685987759811' 2023-06-05 17:56:00,094 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:00,095 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:00,095 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-05 17:56:00,095 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-05 17:56:00,095 DEBUG [RS:0;jenkins-hbase20:38709] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,095 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38709,1685987759811' 2023-06-05 17:56:00,096 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-05 17:56:00,096 DEBUG [RS:0;jenkins-hbase20:38709] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-05 17:56:00,096 DEBUG [RS:0;jenkins-hbase20:38709] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-05 17:56:00,096 INFO [RS:0;jenkins-hbase20:38709] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-05 17:56:00,096 INFO [RS:0;jenkins-hbase20:38709] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-05 17:56:00,139 DEBUG [jenkins-hbase20:39069] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-05 17:56:00,140 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38709,1685987759811, state=OPENING 2023-06-05 17:56:00,141 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-05 17:56:00,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:56:00,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38709,1685987759811}] 2023-06-05 17:56:00,143 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:56:00,200 INFO [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38709%2C1685987759811, suffix=, 
logDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811, archiveDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs, maxLogs=32 2023-06-05 17:56:00,212 INFO [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987760201 2023-06-05 17:56:00,212 DEBUG [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK], DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK]] 2023-06-05 17:56:00,297 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,297 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-05 17:56:00,302 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-05 17:56:00,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-05 17:56:00,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:56:00,313 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38709%2C1685987759811.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811, archiveDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs, maxLogs=32 2023-06-05 17:56:00,324 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.meta.1685987760313.meta 2023-06-05 17:56:00,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK], DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK]] 2023-06-05 17:56:00,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:56:00,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-05 17:56:00,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-05 17:56:00,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-05 17:56:00,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-05 17:56:00,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:56:00,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-05 17:56:00,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-05 17:56:00,326 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:56:00,328 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/info 2023-06-05 17:56:00,328 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/info 2023-06-05 17:56:00,328 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:56:00,329 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:56:00,329 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:56:00,330 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:56:00,330 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:56:00,331 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:56:00,331 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:56:00,331 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:56:00,333 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/table 2023-06-05 17:56:00,333 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/table 2023-06-05 17:56:00,334 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:56:00,335 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:56:00,336 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740 2023-06-05 17:56:00,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740 2023-06-05 17:56:00,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:56:00,341 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:56:00,342 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=804241, jitterRate=0.02264651656150818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:56:00,342 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:56:00,344 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987760297 2023-06-05 17:56:00,348 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-05 17:56:00,348 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-05 17:56:00,349 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38709,1685987759811, state=OPEN 2023-06-05 17:56:00,350 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-05 17:56:00,350 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:56:00,353 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-05 17:56:00,354 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38709,1685987759811 in 207 msec 2023-06-05 17:56:00,357 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-05 17:56:00,357 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-06-05 17:56:00,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 416 msec 2023-06-05 17:56:00,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987760359, completionTime=-1 2023-06-05 17:56:00,359 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-05 17:56:00,359 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-05 17:56:00,363 DEBUG [hconnection-0x5326e4c8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:56:00,365 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:56:00,366 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-05 17:56:00,366 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987820366 2023-06-05 17:56:00,366 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685987880366 2023-06-05 17:56:00,366 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39069,1685987759772-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39069,1685987759772-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39069,1685987759772-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39069, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-05 17:56:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:56:00,374 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-05 17:56:00,376 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-05 17:56:00,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:56:00,380 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:56:00,382 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,383 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44 empty. 2023-06-05 17:56:00,384 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,384 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-05 17:56:00,395 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-05 17:56:00,396 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e46ec87f8619a618947b087a98d5b44, NAME => 'hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5e46ec87f8619a618947b087a98d5b44, disabling compactions & flushes 2023-06-05 17:56:00,405 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. after waiting 0 ms 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,405 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,405 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5e46ec87f8619a618947b087a98d5b44: 2023-06-05 17:56:00,408 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:56:00,409 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987760409"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987760409"}]},"ts":"1685987760409"} 2023-06-05 17:56:00,412 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-05 17:56:00,413 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:56:00,413 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987760413"}]},"ts":"1685987760413"} 2023-06-05 17:56:00,415 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-05 17:56:00,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e46ec87f8619a618947b087a98d5b44, ASSIGN}] 2023-06-05 17:56:00,422 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e46ec87f8619a618947b087a98d5b44, ASSIGN 2023-06-05 17:56:00,423 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5e46ec87f8619a618947b087a98d5b44, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38709,1685987759811; forceNewPlan=false, retain=false 2023-06-05 17:56:00,573 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5e46ec87f8619a618947b087a98d5b44, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,574 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987760573"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987760573"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987760573"}]},"ts":"1685987760573"} 2023-06-05 17:56:00,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 5e46ec87f8619a618947b087a98d5b44, server=jenkins-hbase20.apache.org,38709,1685987759811}] 2023-06-05 17:56:00,735 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e46ec87f8619a618947b087a98d5b44, NAME => 'hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:56:00,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:56:00,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,737 INFO 
[StoreOpener-5e46ec87f8619a618947b087a98d5b44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,739 DEBUG [StoreOpener-5e46ec87f8619a618947b087a98d5b44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/info 2023-06-05 17:56:00,739 DEBUG [StoreOpener-5e46ec87f8619a618947b087a98d5b44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/info 2023-06-05 17:56:00,739 INFO [StoreOpener-5e46ec87f8619a618947b087a98d5b44-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e46ec87f8619a618947b087a98d5b44 columnFamilyName info 2023-06-05 17:56:00,740 INFO [StoreOpener-5e46ec87f8619a618947b087a98d5b44-1] regionserver.HStore(310): Store=5e46ec87f8619a618947b087a98d5b44/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-05 17:56:00,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,744 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5e46ec87f8619a618947b087a98d5b44 2023-06-05 17:56:00,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:56:00,751 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5e46ec87f8619a618947b087a98d5b44; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872433, jitterRate=0.10935673117637634}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:56:00,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5e46ec87f8619a618947b087a98d5b44: 2023-06-05 17:56:00,753 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44., pid=6, masterSystemTime=1685987760728 2023-06-05 17:56:00,755 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,755 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:00,756 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5e46ec87f8619a618947b087a98d5b44, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:00,757 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987760756"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987760756"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987760756"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987760756"}]},"ts":"1685987760756"} 2023-06-05 17:56:00,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-05 17:56:00,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 5e46ec87f8619a618947b087a98d5b44, server=jenkins-hbase20.apache.org,38709,1685987759811 in 183 msec 2023-06-05 17:56:00,765 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-05 17:56:00,765 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5e46ec87f8619a618947b087a98d5b44, ASSIGN in 343 msec 2023-06-05 17:56:00,767 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:56:00,767 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987760767"}]},"ts":"1685987760767"} 2023-06-05 17:56:00,769 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-05 17:56:00,771 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:56:00,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 398 msec 2023-06-05 17:56:00,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-05 17:56:00,785 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:56:00,785 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:56:00,790 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-05 17:56:00,812 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, 
quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:56:00,815 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-05 17:56:00,820 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 29 msec 2023-06-05 17:56:00,822 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-05 17:56:00,832 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:56:00,836 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-06-05 17:56:00,850 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-05 17:56:00,851 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-05 17:56:00,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.022sec 2023-06-05 17:56:00,851 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-05 17:56:00,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-05 17:56:00,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-05 17:56:00,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39069,1685987759772-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-05 17:56:00,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39069,1685987759772-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-05 17:56:00,853 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-05 17:56:00,925 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ReadOnlyZKClient(139): Connect 0x30b7100c to 127.0.0.1:49402 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:56:00,929 DEBUG [Listener at localhost.localdomain/45889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72a2ab3b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:56:00,930 DEBUG [hconnection-0x655dcbda-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:56:00,937 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 
2023-06-05 17:56:00,938 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39069,1685987759772 2023-06-05 17:56:00,939 INFO [Listener at localhost.localdomain/45889] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:56:00,944 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-05 17:56:00,944 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:56:00,945 INFO [Listener at localhost.localdomain/45889] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-05 17:56:00,949 DEBUG [Listener at localhost.localdomain/45889] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-05 17:56:00,952 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56184, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-05 17:56:00,954 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-06-05 17:56:00,954 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-05 17:56:00,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:56:00,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:00,958 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:56:00,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-05 17:56:00,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:56:00,961 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 
17:56:00,964 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:00,964 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780 empty. 2023-06-05 17:56:00,964 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:00,964 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-05 17:56:00,975 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-05 17:56:00,977 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3cbe6c99aae1a6ebd336b87dd1388780, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/.tmp 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 3cbe6c99aae1a6ebd336b87dd1388780, disabling compactions & flushes 2023-06-05 17:56:00,987 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. after waiting 0 ms 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 
2023-06-05 17:56:00,987 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:00,987 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 3cbe6c99aae1a6ebd336b87dd1388780: 2023-06-05 17:56:00,990 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:56:00,991 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685987760991"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987760991"}]},"ts":"1685987760991"} 2023-06-05 17:56:00,992 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-05 17:56:00,993 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:56:00,993 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987760993"}]},"ts":"1685987760993"} 2023-06-05 17:56:00,995 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-05 17:56:00,998 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=3cbe6c99aae1a6ebd336b87dd1388780, ASSIGN}] 2023-06-05 17:56:01,000 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=3cbe6c99aae1a6ebd336b87dd1388780, ASSIGN 2023-06-05 17:56:01,001 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=3cbe6c99aae1a6ebd336b87dd1388780, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38709,1685987759811; forceNewPlan=false, retain=false 2023-06-05 17:56:01,153 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3cbe6c99aae1a6ebd336b87dd1388780, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38709,1685987759811 
2023-06-05 17:56:01,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685987761153"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987761153"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987761153"}]},"ts":"1685987761153"} 2023-06-05 17:56:01,157 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 3cbe6c99aae1a6ebd336b87dd1388780, server=jenkins-hbase20.apache.org,38709,1685987759811}] 2023-06-05 17:56:01,316 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:01,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3cbe6c99aae1a6ebd336b87dd1388780, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:56:01,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:56:01,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 
3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,319 INFO [StoreOpener-3cbe6c99aae1a6ebd336b87dd1388780-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,322 DEBUG [StoreOpener-3cbe6c99aae1a6ebd336b87dd1388780-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info 2023-06-05 17:56:01,322 DEBUG [StoreOpener-3cbe6c99aae1a6ebd336b87dd1388780-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info 2023-06-05 17:56:01,322 INFO [StoreOpener-3cbe6c99aae1a6ebd336b87dd1388780-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
3cbe6c99aae1a6ebd336b87dd1388780 columnFamilyName info 2023-06-05 17:56:01,323 INFO [StoreOpener-3cbe6c99aae1a6ebd336b87dd1388780-1] regionserver.HStore(310): Store=3cbe6c99aae1a6ebd336b87dd1388780/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:56:01,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 3cbe6c99aae1a6ebd336b87dd1388780 2023-06-05 17:56:01,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:56:01,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 3cbe6c99aae1a6ebd336b87dd1388780; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=767901, jitterRate=-0.02356421947479248}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:56:01,333 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 3cbe6c99aae1a6ebd336b87dd1388780: 2023-06-05 17:56:01,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780., pid=11, masterSystemTime=1685987761311 2023-06-05 17:56:01,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:01,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:01,337 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3cbe6c99aae1a6ebd336b87dd1388780, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:01,337 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685987761337"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987761337"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987761337"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987761337"}]},"ts":"1685987761337"} 2023-06-05 17:56:01,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-05 17:56:01,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 3cbe6c99aae1a6ebd336b87dd1388780, 
server=jenkins-hbase20.apache.org,38709,1685987759811 in 182 msec 2023-06-05 17:56:01,344 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-05 17:56:01,345 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=3cbe6c99aae1a6ebd336b87dd1388780, ASSIGN in 344 msec 2023-06-05 17:56:01,346 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:56:01,346 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987761346"}]},"ts":"1685987761346"} 2023-06-05 17:56:01,348 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-05 17:56:01,350 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:56:01,352 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 395 msec 2023-06-05 17:56:05,881 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-05 17:56:06,068 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 
'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:10,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:56:10,962 INFO [Listener at localhost.localdomain/45889] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-05 17:56:10,967 DEBUG [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:10,968 DEBUG [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:10,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-05 17:56:10,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-05 17:56:10,991 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-05 17:56:10,991 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:10,991 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-05 17:56:10,991 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 
2023-06-05 17:56:10,992 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:10,992 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-05 17:56:10,993 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:10,993 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:10,993 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:10,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:10,993 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:10,993 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-06-05 17:56:10,993 DEBUG 
[(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-05 17:56:10,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:10,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-05 17:56:10,994 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-05 17:56:10,994 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-05 17:56:10,996 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-05 17:56:10,996 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-05 17:56:10,996 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:10,997 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-05 17:56:10,998 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-05 17:56:10,998 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 
2023-06-05 17:56:10,998 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:10,998 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. started... 2023-06-05 17:56:10,999 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 5e46ec87f8619a618947b087a98d5b44 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-05 17:56:11,012 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/.tmp/info/7c51c587ec4a469092ad5959870b64a5 2023-06-05 17:56:11,020 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/.tmp/info/7c51c587ec4a469092ad5959870b64a5 as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/info/7c51c587ec4a469092ad5959870b64a5 2023-06-05 17:56:11,026 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/info/7c51c587ec4a469092ad5959870b64a5, entries=2, sequenceid=6, filesize=4.8 K 
2023-06-05 17:56:11,027 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5e46ec87f8619a618947b087a98d5b44 in 28ms, sequenceid=6, compaction requested=false 2023-06-05 17:56:11,028 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 5e46ec87f8619a618947b087a98d5b44: 2023-06-05 17:56:11,028 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. 2023-06-05 17:56:11,028 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-05 17:56:11,028 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-05 17:56:11,028 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,028 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-05 17:56:11,028 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-05 17:56:11,030 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,030 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,030 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,030 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:11,030 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:11,030 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,030 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-05 17:56:11,030 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:11,031 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:11,031 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,031 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,032 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:11,032 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-05 17:56:11,032 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@32367c81[Count = 0] remaining members to acquire global barrier 2023-06-05 17:56:11,033 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-05 17:56:11,033 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,034 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,034 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,034 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,034 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 
2023-06-05 17:56:11,034 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-05 17:56:11,034 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,38709,1685987759811' in zk 2023-06-05 17:56:11,034 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,034 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-05 17:56:11,035 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,035 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-05 17:56:11,035 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:11,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:11,035 DEBUG [member: 
'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-05 17:56:11,036 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-06-05 17:56:11,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:11,036 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:11,037 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,037 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,037 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:11,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,38709,1685987759811': 2023-06-05 17:56:11,039 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' released barrier for procedure 'hbase:namespace', counting down latch. Waiting for 0 more 2023-06-05 17:56:11,039 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-06-05 17:56:11,039 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-05 17:56:11,039 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-05 17:56:11,039 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-06-05 17:56:11,039 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespace including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-05 17:56:11,040 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,040 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,040 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,040 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,040 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:11,040 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:11,040 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): 
regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:11,040 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:11,041 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,041 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:11,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:11,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:11,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,042 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,042 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:11,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-05 17:56:11,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,051 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,051 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:11,051 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:11,051 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-05 17:56:11,051 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:11,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] 
flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-05 17:56:11,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-05 17:56:11,051 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-05 17:56:11,051 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-05 17:56:11,052 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:11,053 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:11,053 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,053 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-05 17:56:11,053 DEBUG [Listener at 
localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-05 17:56:11,053 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:11,053 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:11,054 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace' to complete. (max 20000 ms per retry) 2023-06-05 17:56:11,055 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-05 17:56:21,055 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-05 17:56:21,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-05 17:56:21,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-05 17:56:21,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,081 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:21,081 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:21,082 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-05 17:56:21,082 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-05 17:56:21,083 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,083 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,084 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:21,084 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,085 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:21,085 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:21,085 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,085 DEBUG 
[(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-05 17:56:21,085 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,086 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-05 17:56:21,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,087 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,087 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,087 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-05 17:56:21,089 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:21,090 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 
'acquire' stage 2023-06-05 17:56:21,090 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-05 17:56:21,090 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-05 17:56:21,090 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:21,090 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. started... 2023-06-05 17:56:21,091 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 3cbe6c99aae1a6ebd336b87dd1388780 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-05 17:56:21,107 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/ac6e7ae660de47ffb73eab61f31f1f3d 2023-06-05 17:56:21,117 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/ac6e7ae660de47ffb73eab61f31f1f3d as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d 2023-06-05 17:56:21,125 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d, entries=1, sequenceid=5, filesize=5.8 K 2023-06-05 17:56:21,126 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 3cbe6c99aae1a6ebd336b87dd1388780 in 35ms, sequenceid=5, compaction requested=false 2023-06-05 17:56:21,127 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 3cbe6c99aae1a6ebd336b87dd1388780: 2023-06-05 17:56:21,127 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:21,127 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 
2023-06-05 17:56:21,127 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-05 17:56:21,127 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,127 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-05 17:56:21,127 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-05 17:56:21,129 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,129 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,129 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:21,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:21,129 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,129 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-05 17:56:21,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:21,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:21,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:21,131 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-05 17:56:21,131 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@3609fe1f[Count = 0] remaining members to acquire global barrier 2023-06-05 17:56:21,131 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 
'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-05 17:56:21,131 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,131 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,132 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,132 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,132 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-05 17:56:21,132 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-05 17:56:21,132 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,38709,1685987759811' in zk 2023-06-05 17:56:21,132 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,132 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-05 17:56:21,133 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-05 17:56:21,133 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,133 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-05 17:56:21,133 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,133 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:21,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:21,133 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-05 17:56:21,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:21,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:21,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:21,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,135 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,38709,1685987759811': 2023-06-05 17:56:21,135 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-05 17:56:21,135 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-05 17:56:21,136 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-05 17:56:21,136 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-05 17:56:21,136 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,136 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-05 17:56:21,137 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,137 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:21,137 DEBUG [Listener at localhost.localdomain/45889-EventThread] 
zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,137 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:21,138 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,138 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:21,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,139 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:21,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,140 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,141 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,142 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:21,142 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-05 17:56:21,142 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-05 17:56:21,142 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:21,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:21,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:21,142 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,142 DEBUG [Listener at 
localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,143 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:21,143 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-06-05 17:56:21,143 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:21,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:21,143 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-05 17:56:31,143 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2704): Getting current status of procedure from master... 
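[editor's note] The `HBaseAdmin` lines above show the client side of the wait: a bounded poll with a fixed pause (here 10000 ms) between retries, re-asking the master whether `flush-table-proc` is done, up to a 300000 ms ceiling. A minimal sketch of that polling shape, with a hypothetical `is_done` callback standing in for the master RPC:

```python
import time

def wait_for_procedure(is_done, max_wait_ms=300_000, pause_ms=10_000):
    """Poll is_done() until it returns True or max_wait_ms elapses.

    Mirrors the client wait pattern in the log: check status, and if the
    procedure is not yet complete, sleep a fixed pause before retrying.
    is_done is an assumed stand-in for the real "ask the master" RPC.
    """
    deadline = time.monotonic() + max_wait_ms / 1000.0
    while True:
        if is_done():
            return True                 # procedure completed in time
        if time.monotonic() >= deadline:
            return False                # gave up after max_wait_ms
        time.sleep(pause_ms / 1000.0)   # "(#1) Sleeping: ... while waiting"
```

The real client additionally caps each individual status RPC (the "max 20000 ms per retry" in the log), which this sketch omits.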
2023-06-05 17:56:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-05 17:56:31,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-05 17:56:31,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-05 17:56:31,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,162 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:31,162 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:31,163 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-05 17:56:31,163 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-05 17:56:31,163 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,164 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,234 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,234 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:31,234 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:31,234 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:31,234 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,234 DEBUG 
[(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-05 17:56:31,235 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,235 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-05 17:56:31,236 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,236 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,236 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-05 17:56:31,236 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,236 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-05 17:56:31,237 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:31,237 
DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-05 17:56:31,238 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-05 17:56:31,238 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-05 17:56:31,238 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:31,238 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. started... 
2023-06-05 17:56:31,238 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 3cbe6c99aae1a6ebd336b87dd1388780 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-05 17:56:31,257 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/ad7b69ee38d24ac3aa65d35fc2bd847b 2023-06-05 17:56:31,263 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/ad7b69ee38d24ac3aa65d35fc2bd847b as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b 2023-06-05 17:56:31,270 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b, entries=1, sequenceid=9, filesize=5.8 K 2023-06-05 17:56:31,271 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 3cbe6c99aae1a6ebd336b87dd1388780 in 33ms, sequenceid=9, compaction 
requested=false 2023-06-05 17:56:31,271 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 3cbe6c99aae1a6ebd336b87dd1388780: 2023-06-05 17:56:31,271 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:31,271 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-05 17:56:31,271 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-05 17:56:31,272 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,272 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-05 17:56:31,272 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-05 17:56:31,273 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
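[editor's note] The flush sequence above follows a write-then-commit shape: the memstore is flushed to a file under the region's `.tmp` directory, and only then is that file committed (renamed) into the store's `info` directory, so readers never observe a half-written store file. A local-filesystem sketch of that pattern, assuming hypothetical path and file names (HBase does this on HDFS, not via `os.replace`):

```python
import os

def flush_to_store(store_dir: str, filename: str, data: bytes) -> str:
    """Write data under <store_dir>/.tmp, then publish it with a rename.

    Illustrates the .tmp-then-commit shape in the flush log (flush to
    .tmp/info/<file>, then 'Committing ... as' info/<file>); not HBase's
    actual HDFS code path.
    """
    tmp_dir = os.path.join(store_dir, ".tmp")
    os.makedirs(tmp_dir, exist_ok=True)
    os.makedirs(store_dir, exist_ok=True)
    tmp_path = os.path.join(tmp_dir, filename)
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())            # durable before it becomes visible
    final_path = os.path.join(store_dir, filename)
    os.replace(tmp_path, final_path)    # atomic on POSIX: old or new, never partial
    return final_path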
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,273 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,273 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,273 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:31,274 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:31,274 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,274 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-05 17:56:31,274 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:31,274 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:31,274 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,275 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,275 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:31,275 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-05 17:56:31,275 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@730b76ba[Count = 0] remaining members to acquire global barrier 2023-06-05 17:56:31,275 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-05 17:56:31,275 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,276 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,276 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,276 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,276 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(180): 
Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-05 17:56:31,276 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-05 17:56:31,276 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,38709,1685987759811' in zk 2023-06-05 17:56:31,276 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,276 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-05 17:56:31,278 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-05 17:56:31,278 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,278 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error 
notifications will be received for this timer. 2023-06-05 17:56:31,278 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,278 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:31,278 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:31,278 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-05 17:56:31,279 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:31,279 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:31,280 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,280 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,280 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:31,281 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,281 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,282 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,38709,1685987759811': 2023-06-05 17:56:31,282 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 
'jenkins-hbase20.apache.org,38709,1685987759811' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-05 17:56:31,282 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-05 17:56:31,282 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-05 17:56:31,282 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-05 17:56:31,282 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,282 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-05 17:56:31,292 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,292 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:31,292 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:31,292 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,292 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:31,293 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,293 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:31,293 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:31,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,294 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:31,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,295 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,296 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:31,296 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,297 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,301 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,301 DEBUG [Listener at 
localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:31,301 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,301 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:31,301 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:31,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:31,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-05 17:56:31,301 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-05 17:56:31,301 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-05 17:56:31,301 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,301 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:31,302 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-05 17:56:31,302 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-05 17:56:31,302 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-05 17:56:31,302 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,302 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:56:31,302 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:31,302 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,302 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-05 17:56:41,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-05 17:56:41,315 INFO [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987760201 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987801306 2023-06-05 17:56:41,315 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK], DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK]] 2023-06-05 17:56:41,316 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987760201 is not closed yet, will try archiving it next time 2023-06-05 17:56:41,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-05 17:56:41,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-06-05 17:56:41,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,324 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-05 17:56:41,324 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:41,325 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-05 17:56:41,325 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-05 17:56:41,326 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,326 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,497 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,497 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-05 17:56:41,497 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-05 17:56:41,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:56:41,497 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,497 DEBUG 
[(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-05 17:56:41,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,498 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-05 17:56:41,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,498 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,498 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-05 17:56:41,498 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,498 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-05 17:56:41,498 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-05 17:56:41,499 
DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-05 17:56:41,499 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-05 17:56:41,499 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-05 17:56:41,499 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:41,499 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. started... 
2023-06-05 17:56:41,499 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 3cbe6c99aae1a6ebd336b87dd1388780 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-05 17:56:41,512 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/4a702169c81f451dafd60779e5546924 2023-06-05 17:56:41,519 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/4a702169c81f451dafd60779e5546924 as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924 2023-06-05 17:56:41,524 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924, entries=1, sequenceid=13, filesize=5.8 K 2023-06-05 17:56:41,525 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 3cbe6c99aae1a6ebd336b87dd1388780 in 26ms, sequenceid=13, compaction 
requested=true 2023-06-05 17:56:41,526 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 3cbe6c99aae1a6ebd336b87dd1388780: 2023-06-05 17:56:41,526 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. 2023-06-05 17:56:41,526 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-05 17:56:41,526 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-05 17:56:41,526 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,526 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-05 17:56:41,526 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-05 17:56:41,528 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-06-05 17:56:41,528 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:41,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:41,528 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,528 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-05 17:56:41,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:41,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:41,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 
17:56:41,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:41,530 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-05 17:56:41,530 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4361cf59[Count = 0] remaining members to acquire global barrier 2023-06-05 17:56:41,530 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-05 17:56:41,530 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,531 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,531 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,531 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,531 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,531 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-05 17:56:41,531 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-05 17:56:41,531 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-05 17:56:41,531 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,38709,1685987759811' in zk 2023-06-05 17:56:41,532 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-05 17:56:41,532 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,532 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-05 17:56:41,533 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,533 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:41,533 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-05 17:56:41,533 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-05 17:56:41,533 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-05 17:56:41,534 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-05 17:56:41,534 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,534 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,535 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-05 17:56:41,535 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,535 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811 2023-06-05 17:56:41,536 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,38709,1685987759811': 2023-06-05 17:56:41,536 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' released barrier for 
procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-05 17:56:41,536 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-05 17:56:41,536 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-05 17:56:41,536 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-05 17:56:41,536 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,536 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-05 17:56:41,537 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,537 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-05 17:56:41,537 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-05 17:56:41,537 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): 
|-/hbase/flush-table-proc
2023-06-05 17:56:41,539 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,539 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-05 17:56:41,539 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,540 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,542 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-05 17:56:41,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-05 17:56:41,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:56:41,542 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-05 17:56:41,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,543 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,544 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,548 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-05 17:56:41,548 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,549 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,550 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,550 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-05 17:56:41,550 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,551 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-05 17:56:41,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:56:41,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-05 17:56:41,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:41,551 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-05 17:56:41,551 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-05 17:56:41,551 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry)
2023-06-05 17:56:41,552 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-06-05 17:56:41,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:56:41,552 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-05 17:56:51,552 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2704): Getting current status of procedure from master...
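The HBaseAdmin entries above show the client-side wait: a bounded polling loop that checks procedure status with the master, sleeping 10000 ms between attempts up to a 300000 ms maximum. A minimal generic sketch of that pattern (hypothetical class and method names, not the actual HBaseAdmin code):

```java
import java.util.function.BooleanSupplier;

public class ProcedurePoll {
    // Polls isDone until it returns true or maxWaitMs elapses, sleeping
    // sleepMs between status checks. Returns whether the procedure completed.
    static boolean waitForProcedure(BooleanSupplier isDone, long maxWaitMs, long sleepMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        int attempt = 0;
        while (System.currentTimeMillis() < deadline) {
            if (isDone.getAsBoolean()) {
                return true;
            }
            attempt++;
            // Mirrors the "(#1) Sleeping: 10000ms while waiting..." entry above.
            System.out.println("(#" + attempt + ") Sleeping: " + sleepMs
                    + "ms while waiting for procedure completion.");
            Thread.sleep(sleepMs);
        }
        return isDone.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated procedure that is already done on the first status check.
        boolean done = waitForProcedure(() -> true, 300000, 10000);
        System.out.println("procedure done: " + done);
    }
}
```

The final check after the deadline matters: without it, a procedure that completes during the last sleep would be reported as timed out.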
2023-06-05 17:56:51,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-05 17:56:51,555 DEBUG [Listener at localhost.localdomain/45889] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:56:51,566 DEBUG [Listener at localhost.localdomain/45889] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:56:51,566 DEBUG [Listener at localhost.localdomain/45889] regionserver.HStore(1912): 3cbe6c99aae1a6ebd336b87dd1388780/info is initiating minor compaction (all files)
2023-06-05 17:56:51,567 INFO [Listener at localhost.localdomain/45889] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-05 17:56:51,567 INFO [Listener at localhost.localdomain/45889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:56:51,567 INFO [Listener at localhost.localdomain/45889] regionserver.HRegion(2259): Starting compaction of 3cbe6c99aae1a6ebd336b87dd1388780/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:56:51,568 INFO [Listener at localhost.localdomain/45889] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924] into tmpdir=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp, totalSize=17.4 K
2023-06-05 17:56:51,569 DEBUG [Listener at localhost.localdomain/45889] compactions.Compactor(207): Compacting ac6e7ae660de47ffb73eab61f31f1f3d, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685987781071
2023-06-05 17:56:51,570 DEBUG [Listener at localhost.localdomain/45889] compactions.Compactor(207): Compacting ad7b69ee38d24ac3aa65d35fc2bd847b, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685987791147
2023-06-05 17:56:51,571 DEBUG [Listener at localhost.localdomain/45889] compactions.Compactor(207): Compacting 4a702169c81f451dafd60779e5546924, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685987801304
2023-06-05 17:56:51,588 INFO [Listener at localhost.localdomain/45889] throttle.PressureAwareThroughputController(145): 3cbe6c99aae1a6ebd336b87dd1388780#info#compaction#20 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:56:51,604 DEBUG [Listener at localhost.localdomain/45889] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/60439c3f948d444fabfbdbb01c1b85cb as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/60439c3f948d444fabfbdbb01c1b85cb
2023-06-05 17:56:51,611 INFO [Listener at localhost.localdomain/45889] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 3cbe6c99aae1a6ebd336b87dd1388780/info of 3cbe6c99aae1a6ebd336b87dd1388780 into 60439c3f948d444fabfbdbb01c1b85cb(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute.
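The ExploringCompactionPolicy entry above reports that the 3 candidate files (17769 bytes total) were accepted "with 1 in ratio". The core admission test behind that phrasing is a ratio check: a candidate set qualifies only if no single file is larger than the configured ratio times the combined size of the other files. A minimal sketch of that check (hypothetical helper names, not HBase's actual implementation):

```java
import java.util.Arrays;

public class RatioCheck {
    // Returns true if every file in the candidate set is no larger than
    // ratio * (sum of the other files) -- the admission test that
    // exploring-style compaction selection applies to each permutation.
    static boolean filesInRatio(long[] sizes, double ratio) {
        long total = Arrays.stream(sizes).sum();
        for (long size : sizes) {
            if (size > (total - size) * ratio) {
                return false; // one file dominates; skip this candidate set
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Three similarly sized ~5.8 K store files, as in the log above.
        System.out.println(filesInRatio(new long[]{5923, 5923, 5923}, 1.2));
        // A set dominated by one large file fails the ratio test.
        System.out.println(filesInRatio(new long[]{500_000, 5923, 5923}, 1.2));
    }
}
```

With files of near-equal size the test always passes, which is why this minor compaction of three ~5.8 K flush outputs was selected after considering a single permutation.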
2023-06-05 17:56:51,611 DEBUG [Listener at localhost.localdomain/45889] regionserver.HRegion(2289): Compaction status journal for 3cbe6c99aae1a6ebd336b87dd1388780:
2023-06-05 17:56:51,626 INFO [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987801306 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987811613
2023-06-05 17:56:51,627 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK], DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK]]
2023-06-05 17:56:51,627 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987801306 is not closed yet, will try archiving it next time
2023-06-05 17:56:51,630 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987760201 to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987760201
2023-06-05 17:56:51,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc
2023-06-05 17:56:51,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt.
2023-06-05 17:56:51,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,636 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-05 17:56:51,636 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-05 17:56:51,637 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-06-05 17:56:51,637 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-06-05 17:56:51,637 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,637 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,639 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,639 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-05 17:56:51,639 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-05 17:56:51,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:56:51,639 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,639 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,640 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing
2023-06-05 17:56:51,640 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,640 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-06-05 17:56:51,640 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-05 17:56:51,641 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-06-05 17:56:51,641 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-06-05 17:56:51,641 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-06-05 17:56:51,641 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:56:51,641 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. started...
2023-06-05 17:56:51,641 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 3cbe6c99aae1a6ebd336b87dd1388780 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-05 17:56:51,654 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/c142db1d0ca5452cb7b95b62d50c4d4a
2023-06-05 17:56:51,659 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/c142db1d0ca5452cb7b95b62d50c4d4a as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/c142db1d0ca5452cb7b95b62d50c4d4a
2023-06-05 17:56:51,664 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/c142db1d0ca5452cb7b95b62d50c4d4a, entries=1, sequenceid=18, filesize=5.8 K
2023-06-05 17:56:51,665 INFO [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 3cbe6c99aae1a6ebd336b87dd1388780 in 24ms, sequenceid=18, compaction requested=false
2023-06-05 17:56:51,665 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 3cbe6c99aae1a6ebd336b87dd1388780:
2023-06-05 17:56:51,666 DEBUG [rs(jenkins-hbase20.apache.org,38709,1685987759811)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:56:51,666 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-06-05 17:56:51,666 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-06-05 17:56:51,666 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,666 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-06-05 17:56:51,666 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-06-05 17:56:51,668 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,668 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,668 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,668 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-05 17:56:51,668 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-05 17:56:51,668 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,668 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-05 17:56:51,668 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-06-05 17:56:51,668 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-05 17:56:51,669 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,669 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,669 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-05 17:56:51,669 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,38709,1685987759811' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-06-05 17:56:51,669 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@423261c0[Count = 0] remaining members to acquire global barrier
2023-06-05 17:56:51,669 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-06-05 17:56:51,669 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,670 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,670 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,670 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,670 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-06-05 17:56:51,670 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-06-05 17:56:51,670 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,670 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-06-05 17:56:51,670 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,38709,1685987759811' in zk
2023-06-05 17:56:51,672 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,672 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-06-05 17:56:51,672 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,672 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-05 17:56:51,672 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-05 17:56:51,672 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-05 17:56:51,672 DEBUG [member: 'jenkins-hbase20.apache.org,38709,1685987759811' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-06-05 17:56:51,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-05 17:56:51,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-05 17:56:51,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-05 17:56:51,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,38709,1685987759811':
2023-06-05 17:56:51,675 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,38709,1685987759811' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-06-05 17:56:51,675 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-06-05 17:56:51,675 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-05 17:56:51,675 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-05 17:56:51,675 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,675 INFO [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-05 17:56:51,676 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,676 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-05 17:56:51,676 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,676 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-05 17:56:51,676 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-05 17:56:51,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:56:51,676 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-05 17:56:51,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,678 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-05 17:56:51,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,681 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-05 17:56:51,681 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:56:51,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-05 17:56:51,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-05 17:56:51,681 DEBUG [(jenkins-hbase20.apache.org,39069,1685987759772)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:56:51,681 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-05 17:56:51,682 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,682 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-05 17:56:51,682 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry)
2023-06-05 17:56:51,682 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-06-05 17:56:51,682 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-05 17:56:51,682 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-05 17:56:51,682 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:57:01,682 DEBUG [Listener at localhost.localdomain/45889] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-05 17:57:01,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39069] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-05 17:57:01,702 INFO [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987811613 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987821689
2023-06-05 17:57:01,702 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35765,DS-089525b5-2642-4891-923e-cbf430a2fb2c,DISK], DatanodeInfoWithStorage[127.0.0.1:44455,DS-501a792a-f7ea-4733-a391-4a461ff891d3,DISK]]
2023-06-05 17:57:01,702 DEBUG [Listener at localhost.localdomain/45889] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987811613 is not closed yet, will try archiving it next time
2023-06-05 17:57:01,702 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987801306 to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs/jenkins-hbase20.apache.org%2C38709%2C1685987759811.1685987801306
2023-06-05 17:57:01,702 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-05 17:57:01,703 INFO [Listener at localhost.localdomain/45889] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-06-05 17:57:01,703 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x30b7100c to 127.0.0.1:49402
2023-06-05 17:57:01,703 DEBUG [Listener at localhost.localdomain/45889] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:57:01,704 DEBUG [Listener at localhost.localdomain/45889] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-05 17:57:01,705 DEBUG [Listener at localhost.localdomain/45889] util.JVMClusterUtil(257): Found active master hash=1699096969, stopped=false
2023-06-05 17:57:01,705 INFO [Listener at localhost.localdomain/45889] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39069,1685987759772
2023-06-05 17:57:01,707 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:57:01,707 INFO [Listener at localhost.localdomain/45889] procedure2.ProcedureExecutor(629): Stopping
2023-06-05 17:57:01,707 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:57:01,707 DEBUG [Listener at localhost.localdomain/45889] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d65ba0b to 127.0.0.1:49402
2023-06-05 17:57:01,707 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:57:01,708 DEBUG [Listener at localhost.localdomain/45889] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:57:01,708 INFO [Listener at localhost.localdomain/45889] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38709,1685987759811' *****
2023-06-05 17:57:01,709 INFO [Listener at localhost.localdomain/45889] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-05 17:57:01,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:57:01,709 INFO [RS:0;jenkins-hbase20:38709] regionserver.HeapMemoryManager(220): Stopping
2023-06-05 17:57:01,709 INFO [RS:0;jenkins-hbase20:38709] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-05 17:57:01,709 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-05 17:57:01,709 INFO [RS:0;jenkins-hbase20:38709] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-05 17:57:01,710 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:57:01,710 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(3303): Received CLOSE for 5e46ec87f8619a618947b087a98d5b44
2023-06-05 17:57:01,710 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(3303): Received CLOSE for 3cbe6c99aae1a6ebd336b87dd1388780
2023-06-05 17:57:01,710 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:57:01,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5e46ec87f8619a618947b087a98d5b44, disabling compactions & flushes
2023-06-05 17:57:01,710 DEBUG [RS:0;jenkins-hbase20:38709] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f7cf002 to 127.0.0.1:49402
2023-06-05 17:57:01,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.
2023-06-05 17:57:01,711 DEBUG [RS:0;jenkins-hbase20:38709] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:57:01,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.
2023-06-05 17:57:01,711 INFO [RS:0;jenkins-hbase20:38709] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-05 17:57:01,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44. after waiting 0 ms
2023-06-05 17:57:01,711 INFO [RS:0;jenkins-hbase20:38709] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-05 17:57:01,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.
2023-06-05 17:57:01,711 INFO [RS:0;jenkins-hbase20:38709] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-05 17:57:01,711 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-05 17:57:01,711 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-06-05 17:57:01,712 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1478): Online Regions={5e46ec87f8619a618947b087a98d5b44=hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44., 1588230740=hbase:meta,,1.1588230740, 3cbe6c99aae1a6ebd336b87dd1388780=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.}
2023-06-05 17:57:01,712 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1504): Waiting on 1588230740, 3cbe6c99aae1a6ebd336b87dd1388780, 5e46ec87f8619a618947b087a98d5b44
2023-06-05 17:57:01,712 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:57:01,712 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:57:01,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:57:01,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:57:01,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:57:01,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB
2023-06-05 17:57:01,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/namespace/5e46ec87f8619a618947b087a98d5b44/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-05 17:57:01,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.
2023-06-05 17:57:01,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5e46ec87f8619a618947b087a98d5b44:
2023-06-05 17:57:01,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685987760373.5e46ec87f8619a618947b087a98d5b44.
2023-06-05 17:57:01,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 3cbe6c99aae1a6ebd336b87dd1388780, disabling compactions & flushes
2023-06-05 17:57:01,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:57:01,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:57:01,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780. after waiting 0 ms
2023-06-05 17:57:01,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:57:01,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 3cbe6c99aae1a6ebd336b87dd1388780 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-05 17:57:01,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/.tmp/info/c3f4fc60ca394b76b956ba664f8f3c55
2023-06-05 17:57:01,770 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/.tmp/table/b9379981358b40c98b2f59a25cf79c2b
2023-06-05 17:57:01,776 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/.tmp/info/c3f4fc60ca394b76b956ba664f8f3c55 as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/info/c3f4fc60ca394b76b956ba664f8f3c55
2023-06-05 17:57:01,781 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/info/c3f4fc60ca394b76b956ba664f8f3c55, entries=20, sequenceid=14, filesize=7.6 K
2023-06-05 17:57:01,782 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/.tmp/table/b9379981358b40c98b2f59a25cf79c2b as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/table/b9379981358b40c98b2f59a25cf79c2b
2023-06-05 17:57:01,789 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/table/b9379981358b40c98b2f59a25cf79c2b, entries=4, sequenceid=14, filesize=4.9 K
2023-06-05 17:57:01,790 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 77ms, sequenceid=14, compaction requested=false
2023-06-05 17:57:01,797 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2023-06-05 17:57:01,797 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-05 17:57:01,798 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-05 17:57:01,798 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:57:01,798 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-05 17:57:01,912 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1504): Waiting on 3cbe6c99aae1a6ebd336b87dd1388780
2023-06-05 17:57:02,073 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-06-05 17:57:02,073 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-06-05 17:57:02,076 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:57:02,112 DEBUG [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1504): Waiting on 3cbe6c99aae1a6ebd336b87dd1388780
2023-06-05 17:57:02,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/0ace8def2b214b62884d71073ff3283c
2023-06-05 17:57:02,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/.tmp/info/0ace8def2b214b62884d71073ff3283c as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/0ace8def2b214b62884d71073ff3283c
2023-06-05 17:57:02,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/0ace8def2b214b62884d71073ff3283c, entries=1, sequenceid=22, filesize=5.8 K
2023-06-05 17:57:02,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 3cbe6c99aae1a6ebd336b87dd1388780 in 466ms, sequenceid=22, compaction requested=true
2023-06-05 17:57:02,191 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924] to archive
2023-06-05 17:57:02,192 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-05 17:57:02,195 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ac6e7ae660de47ffb73eab61f31f1f3d
2023-06-05 17:57:02,196 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/ad7b69ee38d24ac3aa65d35fc2bd847b
2023-06-05 17:57:02,197 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924 to hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/info/4a702169c81f451dafd60779e5546924
2023-06-05 17:57:02,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/3cbe6c99aae1a6ebd336b87dd1388780/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1
2023-06-05 17:57:02,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:57:02,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 3cbe6c99aae1a6ebd336b87dd1388780:
2023-06-05 17:57:02,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685987760954.3cbe6c99aae1a6ebd336b87dd1388780.
2023-06-05 17:57:02,313 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38709,1685987759811; all regions closed.
2023-06-05 17:57:02,314 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:57:02,326 DEBUG [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs
2023-06-05 17:57:02,326 INFO [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38709%2C1685987759811.meta:.meta(num 1685987760313)
2023-06-05 17:57:02,326 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/WALs/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:57:02,333 DEBUG [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/oldWALs
2023-06-05 17:57:02,333 INFO [RS:0;jenkins-hbase20:38709] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38709%2C1685987759811:(num 1685987821689)
2023-06-05 17:57:02,333 DEBUG [RS:0;jenkins-hbase20:38709] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:57:02,333 INFO [RS:0;jenkins-hbase20:38709] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:57:02,333 INFO [RS:0;jenkins-hbase20:38709] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-05 17:57:02,333 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:57:02,334 INFO [RS:0;jenkins-hbase20:38709] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38709
2023-06-05 17:57:02,337 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38709,1685987759811
2023-06-05 17:57:02,337 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:57:02,337 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:57:02,337 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38709,1685987759811]
2023-06-05 17:57:02,337 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38709,1685987759811; numProcessing=1
2023-06-05 17:57:02,338 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38709,1685987759811 already deleted, retry=false
2023-06-05 17:57:02,338 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38709,1685987759811 expired; onlineServers=0
2023-06-05 17:57:02,338 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39069,1685987759772' *****
2023-06-05 17:57:02,338 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-05 17:57:02,339 DEBUG [M:0;jenkins-hbase20:39069] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6179a1f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-06-05 17:57:02,339 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39069,1685987759772
2023-06-05 17:57:02,339 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39069,1685987759772; all regions closed.
2023-06-05 17:57:02,339 DEBUG [M:0;jenkins-hbase20:39069] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:57:02,339 DEBUG [M:0;jenkins-hbase20:39069] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-05 17:57:02,339 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-05 17:57:02,339 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987759949] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987759949,5,FailOnTimeoutGroup]
2023-06-05 17:57:02,339 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987759949] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987759949,5,FailOnTimeoutGroup]
2023-06-05 17:57:02,339 DEBUG [M:0;jenkins-hbase20:39069] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-05 17:57:02,341 INFO [M:0;jenkins-hbase20:39069] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-05 17:57:02,341 INFO [M:0;jenkins-hbase20:39069] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-05 17:57:02,341 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-05 17:57:02,341 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:57:02,341 INFO [M:0;jenkins-hbase20:39069] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-06-05 17:57:02,341 DEBUG [M:0;jenkins-hbase20:39069] master.HMaster(1512): Stopping service threads
2023-06-05 17:57:02,341 INFO [M:0;jenkins-hbase20:39069] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-05 17:57:02,342 ERROR [M:0;jenkins-hbase20:39069] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-05 17:57:02,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:57:02,342 INFO [M:0;jenkins-hbase20:39069] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-05 17:57:02,342 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-05 17:57:02,342 DEBUG [M:0;jenkins-hbase20:39069] zookeeper.ZKUtil(398): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-05 17:57:02,342 WARN [M:0;jenkins-hbase20:39069] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-05 17:57:02,342 INFO [M:0;jenkins-hbase20:39069] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-05 17:57:02,343 INFO [M:0;jenkins-hbase20:39069] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-05 17:57:02,343 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:57:02,343 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:57:02,343 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:57:02,343 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:57:02,343 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:57:02,343 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB
2023-06-05 17:57:02,354 INFO [M:0;jenkins-hbase20:39069] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ca41d846764042ed97914c5df45ea9a6
2023-06-05 17:57:02,358 INFO [M:0;jenkins-hbase20:39069] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ca41d846764042ed97914c5df45ea9a6
2023-06-05 17:57:02,359 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ca41d846764042ed97914c5df45ea9a6 as hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ca41d846764042ed97914c5df45ea9a6
2023-06-05 17:57:02,364 INFO [M:0;jenkins-hbase20:39069] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ca41d846764042ed97914c5df45ea9a6
2023-06-05 17:57:02,364 INFO [M:0;jenkins-hbase20:39069] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38221/user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ca41d846764042ed97914c5df45ea9a6, entries=11, sequenceid=100, filesize=6.1 K
2023-06-05 17:57:02,365 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=100, compaction requested=false
2023-06-05 17:57:02,366 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:57:02,366 DEBUG [M:0;jenkins-hbase20:39069] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:57:02,366 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fcf38186-4bb8-79a5-1a20-2daefa236338/MasterData/WALs/jenkins-hbase20.apache.org,39069,1685987759772
2023-06-05 17:57:02,369 INFO [M:0;jenkins-hbase20:39069] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-05 17:57:02,369 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:57:02,370 INFO [M:0;jenkins-hbase20:39069] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39069
2023-06-05 17:57:02,371 DEBUG [M:0;jenkins-hbase20:39069] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39069,1685987759772 already deleted, retry=false
2023-06-05 17:57:02,438 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:57:02,438 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): regionserver:38709-0x101bc69aeb10001, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:57:02,438 INFO [RS:0;jenkins-hbase20:38709] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38709,1685987759811; zookeeper connection closed.
2023-06-05 17:57:02,439 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@549785f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@549785f3
2023-06-05 17:57:02,439 INFO [Listener at localhost.localdomain/45889] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-05 17:57:02,539 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:57:02,539 DEBUG [Listener at localhost.localdomain/45889-EventThread] zookeeper.ZKWatcher(600): master:39069-0x101bc69aeb10000, quorum=127.0.0.1:49402, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:57:02,539 INFO [M:0;jenkins-hbase20:39069] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39069,1685987759772; zookeeper connection closed.
2023-06-05 17:57:02,540 WARN [Listener at localhost.localdomain/45889] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:57:02,552 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:57:02,659 WARN [BP-2025876527-148.251.75.209-1685987759018 heartbeating to localhost.localdomain/127.0.0.1:38221] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:57:02,659 WARN [BP-2025876527-148.251.75.209-1685987759018 heartbeating to localhost.localdomain/127.0.0.1:38221] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2025876527-148.251.75.209-1685987759018 (Datanode Uuid 177fcb07-d942-4d3e-900d-b296ed159ac6) service to localhost.localdomain/127.0.0.1:38221
2023-06-05 17:57:02,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/dfs/data/data3/current/BP-2025876527-148.251.75.209-1685987759018] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:57:02,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/dfs/data/data4/current/BP-2025876527-148.251.75.209-1685987759018] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:57:02,662 WARN [Listener at localhost.localdomain/45889] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:57:02,667 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:57:02,776 WARN [BP-2025876527-148.251.75.209-1685987759018 heartbeating to localhost.localdomain/127.0.0.1:38221] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:57:02,776 WARN [BP-2025876527-148.251.75.209-1685987759018 heartbeating to localhost.localdomain/127.0.0.1:38221] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2025876527-148.251.75.209-1685987759018 (Datanode Uuid be7740f5-cd17-447a-9878-68cce00ab3aa) service to localhost.localdomain/127.0.0.1:38221
2023-06-05 17:57:02,777 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/dfs/data/data1/current/BP-2025876527-148.251.75.209-1685987759018] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:57:02,778 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/cluster_5c6fa081-b821-ddf8-f727-131e46dca619/dfs/data/data2/current/BP-2025876527-148.251.75.209-1685987759018] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:57:02,791 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-05 17:57:02,905 INFO [Listener at localhost.localdomain/45889] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-05 17:57:02,923 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-05 17:57:02,932 INFO [Listener at localhost.localdomain/45889] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=95 (was 88) - Thread LEAK? -, OpenFileDescriptor=498 (was 463) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=95 (was 148), ProcessCount=167 (was 169), AvailableMemoryMB=6787 (was 6383) - AvailableMemoryMB LEAK? -
2023-06-05 17:57:02,939 INFO [Listener at localhost.localdomain/45889] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=96, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=95, ProcessCount=167, AvailableMemoryMB=6787
2023-06-05 17:57:02,939 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/hadoop.log.dir so I do NOT create it in target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5324790-5478-06e1-a8ed-68af4f0dddb9/hadoop.tmp.dir so I do NOT create it in target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da, deleteOnExit=true
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/test.cache.data in system properties and HBase conf
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/hadoop.tmp.dir in system properties and HBase conf
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/hadoop.log.dir in system properties and HBase conf
2023-06-05 17:57:02,940 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-05 17:57:02,941 DEBUG [Listener at localhost.localdomain/45889] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:57:02,941 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/nfs.dump.dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/java.io.tmpdir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-05 17:57:02,942 INFO [Listener at localhost.localdomain/45889] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-05 17:57:02,944 WARN [Listener at localhost.localdomain/45889] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-05 17:57:02,945 WARN [Listener at localhost.localdomain/45889] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:57:02,945 WARN [Listener at localhost.localdomain/45889] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:57:02,969 WARN [Listener at localhost.localdomain/45889] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:57:02,972 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:57:02,978 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/java.io.tmpdir/Jetty_localhost_localdomain_44463_hdfs____.vaiaph/webapp
2023-06-05 17:57:03,052 INFO [Listener at localhost.localdomain/45889] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:44463
2023-06-05 17:57:03,053 WARN [Listener at localhost.localdomain/45889] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-05 17:57:03,055 WARN [Listener at localhost.localdomain/45889] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:57:03,055 WARN [Listener at localhost.localdomain/45889] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:57:03,081 WARN [Listener at localhost.localdomain/43409] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:57:03,088 WARN [Listener at localhost.localdomain/43409] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:57:03,091 WARN [Listener at localhost.localdomain/43409] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:57:03,091 INFO [Listener at localhost.localdomain/43409] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:57:03,096 INFO [Listener at localhost.localdomain/43409] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/java.io.tmpdir/Jetty_localhost_43399_datanode____epsyvv/webapp
2023-06-05 17:57:03,171 INFO [Listener at localhost.localdomain/43409] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43399
2023-06-05 17:57:03,176 WARN [Listener at localhost.localdomain/41319] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:57:03,185 WARN [Listener at localhost.localdomain/41319] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:57:03,188 WARN [Listener at localhost.localdomain/41319] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:57:03,189 INFO [Listener at localhost.localdomain/41319] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:57:03,193 INFO [Listener at localhost.localdomain/41319] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/java.io.tmpdir/Jetty_localhost_38107_datanode____.eshpnq/webapp
2023-06-05 17:57:03,306 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20d635570c3b1c1d: Processing first storage report for DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2 from datanode 0e1f1bcf-4401-4a5b-9ca1-be7168946673
2023-06-05 17:57:03,306 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20d635570c3b1c1d: from storage DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2 node DatanodeRegistration(127.0.0.1:45559, datanodeUuid=0e1f1bcf-4401-4a5b-9ca1-be7168946673, infoPort=46875, infoSecurePort=0, ipcPort=41319, storageInfo=lv=-57;cid=testClusterID;nsid=162667960;c=1685987822947), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:57:03,306 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20d635570c3b1c1d: Processing first storage report for DS-1484c63e-c460-43d1-b32c-52c0aae17301 from datanode 0e1f1bcf-4401-4a5b-9ca1-be7168946673
2023-06-05 17:57:03,306 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20d635570c3b1c1d: from storage DS-1484c63e-c460-43d1-b32c-52c0aae17301 node DatanodeRegistration(127.0.0.1:45559, datanodeUuid=0e1f1bcf-4401-4a5b-9ca1-be7168946673, infoPort=46875, infoSecurePort=0, ipcPort=41319, storageInfo=lv=-57;cid=testClusterID;nsid=162667960;c=1685987822947), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:57:03,330 INFO [Listener at localhost.localdomain/41319] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38107
2023-06-05 17:57:03,337 WARN [Listener at localhost.localdomain/37565] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:57:03,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf5c6abb65d98064f: Processing first storage report for DS-28044e8d-fcd6-452c-99ab-f3a807e211b4 from datanode 640cdc3f-58c8-4b88-9c8d-b7f9cc025907
2023-06-05 17:57:03,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf5c6abb65d98064f: from storage DS-28044e8d-fcd6-452c-99ab-f3a807e211b4 node DatanodeRegistration(127.0.0.1:36145, datanodeUuid=640cdc3f-58c8-4b88-9c8d-b7f9cc025907, infoPort=41995, infoSecurePort=0, ipcPort=37565, storageInfo=lv=-57;cid=testClusterID;nsid=162667960;c=1685987822947), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:57:03,420 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf5c6abb65d98064f: Processing first storage report for DS-2ef85a08-d525-40f8-9f08-e1b6b43234a4 from datanode 640cdc3f-58c8-4b88-9c8d-b7f9cc025907
2023-06-05 17:57:03,420 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf5c6abb65d98064f: from storage DS-2ef85a08-d525-40f8-9f08-e1b6b43234a4 node DatanodeRegistration(127.0.0.1:36145, datanodeUuid=640cdc3f-58c8-4b88-9c8d-b7f9cc025907, infoPort=41995, infoSecurePort=0, ipcPort=37565, storageInfo=lv=-57;cid=testClusterID;nsid=162667960;c=1685987822947), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:57:03,445 DEBUG [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c
2023-06-05 17:57:03,447 INFO [Listener at localhost.localdomain/37565] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/zookeeper_0, clientPort=57589, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-05 17:57:03,448 INFO [Listener at localhost.localdomain/37565] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57589
2023-06-05 17:57:03,449 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:57:03,450 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:57:03,467 INFO [Listener at localhost.localdomain/37565] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e with version=8
2023-06-05 17:57:03,468 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:57:03,470 INFO [Listener at localhost.localdomain/37565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:57:03,472 INFO [Listener at localhost.localdomain/37565] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36181
2023-06-05 17:57:03,472 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:57:03,473 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:57:03,474 INFO [Listener at localhost.localdomain/37565] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36181 connecting to ZooKeeper ensemble=127.0.0.1:57589
2023-06-05 17:57:03,479 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:361810x0, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:57:03,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36181-0x101bc6aa7820000 connected
2023-06-05 17:57:03,495 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:57:03,496 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:57:03,496 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:57:03,497 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36181 2023-06-05 17:57:03,497 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36181 2023-06-05 17:57:03,498 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36181 2023-06-05 17:57:03,498 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36181 2023-06-05 17:57:03,499 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36181 2023-06-05 17:57:03,499 INFO [Listener at localhost.localdomain/37565] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e, hbase.cluster.distributed=false 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-05 17:57:03,518 INFO [Listener at localhost.localdomain/37565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-05 17:57:03,519 INFO [Listener at localhost.localdomain/37565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-05 17:57:03,520 INFO [Listener at localhost.localdomain/37565] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39611 2023-06-05 17:57:03,520 INFO [Listener at localhost.localdomain/37565] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-05 17:57:03,523 DEBUG [Listener at localhost.localdomain/37565] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-05 17:57:03,523 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:57:03,524 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:57:03,525 INFO [Listener at localhost.localdomain/37565] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39611 connecting to ZooKeeper ensemble=127.0.0.1:57589 2023-06-05 17:57:03,527 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:396110x0, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-05 17:57:03,528 DEBUG 
[Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): regionserver:396110x0, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-05 17:57:03,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39611-0x101bc6aa7820001 connected 2023-06-05 17:57:03,529 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-05 17:57:03,530 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ZKUtil(164): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-05 17:57:03,530 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39611 2023-06-05 17:57:03,530 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39611 2023-06-05 17:57:03,530 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39611 2023-06-05 17:57:03,531 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39611 2023-06-05 17:57:03,531 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39611 2023-06-05 17:57:03,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,533 DEBUG [Listener at localhost.localdomain/37565-EventThread] 
zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:57:03,533 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,534 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:57:03,534 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-05 17:57:03,534 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,534 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:57:03,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,36181,1685987823469 from backup master directory 2023-06-05 17:57:03,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-05 17:57:03,536 DEBUG [Listener at 
localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,536 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-05 17:57:03,536 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-05 17:57:03,536 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,549 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/hbase.id with ID: 5e2c6524-28b5-4282-b627-c33757f88d48 2023-06-05 17:57:03,562 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:57:03,564 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,572 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x01e9be4a to 127.0.0.1:57589 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:57:03,575 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@144e5afd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:57:03,576 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:57:03,576 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-05 17:57:03,577 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:57:03,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store-tmp 2023-06-05 17:57:03,586 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:03,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-05 17:57:03,587 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:57:03,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:57:03,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:57:03,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:57:03,587 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-05 17:57:03,587 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:57:03,588 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/WALs/jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,591 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36181%2C1685987823469, suffix=, logDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/WALs/jenkins-hbase20.apache.org,36181,1685987823469, archiveDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/oldWALs, maxLogs=10 2023-06-05 17:57:03,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/WALs/jenkins-hbase20.apache.org,36181,1685987823469/jenkins-hbase20.apache.org%2C36181%2C1685987823469.1685987823591 2023-06-05 17:57:03,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45559,DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:36145,DS-28044e8d-fcd6-452c-99ab-f3a807e211b4,DISK]] 2023-06-05 17:57:03,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:57:03,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:03,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,600 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,601 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-05 17:57:03,602 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-05 17:57:03,602 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:03,603 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,604 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,606 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-05 17:57:03,610 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:57:03,611 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=848986, jitterRate=0.07954233884811401}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:57:03,611 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:57:03,611 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-05 17:57:03,612 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-05 17:57:03,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-05 17:57:03,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-05 17:57:03,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-05 17:57:03,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-05 17:57:03,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-05 17:57:03,614 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-05 17:57:03,615 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-05 17:57:03,624 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:57:03,624 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:57:03,624 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:57:03,624 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:57:03,625 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:57:03,626 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:57:03,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:57:03,628 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:57:03,628 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:57:03,628 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:57:03,628 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,629 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,36181,1685987823469, sessionid=0x101bc6aa7820000, setting cluster-up flag (Was=false) 2023-06-05 17:57:03,632 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,634 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:57:03,634 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,636 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,639 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:57:03,640 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:03,640 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.hbase-snapshot/.tmp 2023-06-05 17:57:03,642 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:57:03,643 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:57:03,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,645 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987853645 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-05 17:57:03,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:57:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:57:03,647 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:57:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:57:03,647 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:57:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:57:03,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:57:03,647 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987823647,5,FailOnTimeoutGroup] 2023-06-05 17:57:03,648 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987823647,5,FailOnTimeoutGroup] 2023-06-05 17:57:03,648 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,648 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:57:03,648 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-06-05 17:57:03,648 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,648 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,658 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:57:03,659 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:57:03,659 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e 2023-06-05 17:57:03,666 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:03,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:57:03,668 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info 2023-06-05 17:57:03,668 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:57:03,669 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:03,669 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:57:03,670 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:57:03,670 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:57:03,671 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:03,671 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:57:03,672 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/table 2023-06-05 17:57:03,672 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:57:03,673 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:03,674 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740 2023-06-05 17:57:03,674 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740 2023-06-05 17:57:03,676 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:57:03,678 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:57:03,680 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:57:03,681 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=713991, jitterRate=-0.09211356937885284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:57:03,681 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:57:03,681 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:57:03,681 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:57:03,682 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:57:03,682 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:57:03,682 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:57:03,682 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:57:03,682 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:57:03,683 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:57:03,683 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-05 17:57:03,683 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-05 17:57:03,685 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-05 17:57:03,686 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-05 17:57:03,734 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(951): ClusterId : 5e2c6524-28b5-4282-b627-c33757f88d48 2023-06-05 17:57:03,735 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:57:03,738 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:57:03,738 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-05 17:57:03,742 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:57:03,743 DEBUG [RS:0;jenkins-hbase20:39611] zookeeper.ReadOnlyZKClient(139): Connect 0x6c5a9742 to 127.0.0.1:57589 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:57:03,748 DEBUG [RS:0;jenkins-hbase20:39611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@629d64b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-06-05 17:57:03,748 DEBUG [RS:0;jenkins-hbase20:39611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66a2fc32, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:57:03,758 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:39611 2023-06-05 17:57:03,759 INFO [RS:0;jenkins-hbase20:39611] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:57:03,759 INFO [RS:0;jenkins-hbase20:39611] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:57:03,759 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1022): About to register with Master. 2023-06-05 17:57:03,759 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36181,1685987823469 with isa=jenkins-hbase20.apache.org/148.251.75.209:39611, startcode=1685987823517 2023-06-05 17:57:03,760 DEBUG [RS:0;jenkins-hbase20:39611] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-05 17:57:03,764 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53937, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-05 17:57:03,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,766 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e 
2023-06-05 17:57:03,766 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43409 2023-06-05 17:57:03,766 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-05 17:57:03,767 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:57:03,768 DEBUG [RS:0;jenkins-hbase20:39611] zookeeper.ZKUtil(162): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,768 WARN [RS:0;jenkins-hbase20:39611] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-05 17:57:03,768 INFO [RS:0;jenkins-hbase20:39611] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:57:03,768 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,768 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,39611,1685987823517] 2023-06-05 17:57:03,772 DEBUG [RS:0;jenkins-hbase20:39611] zookeeper.ZKUtil(162): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,773 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-05 17:57:03,773 INFO [RS:0;jenkins-hbase20:39611] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-05 17:57:03,776 INFO [RS:0;jenkins-hbase20:39611] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-05 17:57:03,777 INFO [RS:0;jenkins-hbase20:39611] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-05 17:57:03,777 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:57:03,777 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-05 17:57:03,778 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,778 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,778 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 DEBUG [RS:0;jenkins-hbase20:39611] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:57:03,779 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,780 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,780 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:03,791 INFO [RS:0;jenkins-hbase20:39611] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-05 17:57:03,791 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39611,1685987823517-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:57:03,799 INFO [RS:0;jenkins-hbase20:39611] regionserver.Replication(203): jenkins-hbase20.apache.org,39611,1685987823517 started 2023-06-05 17:57:03,800 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,39611,1685987823517, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:39611, sessionid=0x101bc6aa7820001 2023-06-05 17:57:03,800 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-05 17:57:03,800 DEBUG [RS:0;jenkins-hbase20:39611] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,800 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39611,1685987823517' 2023-06-05 17:57:03,800 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-05 17:57:03,800 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39611,1685987823517' 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-05 17:57:03,801 DEBUG [RS:0;jenkins-hbase20:39611] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-05 17:57:03,801 INFO [RS:0;jenkins-hbase20:39611] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-05 17:57:03,801 INFO [RS:0;jenkins-hbase20:39611] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-05 17:57:03,836 DEBUG [jenkins-hbase20:36181] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-05 17:57:03,837 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39611,1685987823517, state=OPENING 2023-06-05 17:57:03,838 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-05 17:57:03,840 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:03,841 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:03,841 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:57:03,906 INFO [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39611%2C1685987823517, suffix=, 
logDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517, archiveDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs, maxLogs=32 2023-06-05 17:57:03,917 INFO [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987823907 2023-06-05 17:57:03,917 DEBUG [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36145,DS-28044e8d-fcd6-452c-99ab-f3a807e211b4,DISK], DatanodeInfoWithStorage[127.0.0.1:45559,DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2,DISK]] 2023-06-05 17:57:03,998 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:03,998 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-05 17:57:04,003 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-05 17:57:04,008 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-05 17:57:04,009 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:57:04,012 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39611%2C1685987823517.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517, archiveDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs, maxLogs=32 2023-06-05 17:57:04,022 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.meta.1685987824013.meta 2023-06-05 17:57:04,022 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45559,DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:36145,DS-28044e8d-fcd6-452c-99ab-f3a807e211b4,DISK]] 2023-06-05 17:57:04,022 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:57:04,022 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-05 17:57:04,023 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-05 17:57:04,023 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-05 17:57:04,023 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-05 17:57:04,023 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:04,023 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-05 17:57:04,023 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-05 17:57:04,026 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:57:04,028 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info 2023-06-05 17:57:04,028 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info 2023-06-05 17:57:04,028 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:57:04,029 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:04,029 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:57:04,031 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:57:04,031 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:57:04,031 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:57:04,032 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:04,032 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:57:04,033 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/table 2023-06-05 17:57:04,033 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/table 2023-06-05 17:57:04,033 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:57:04,034 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:04,035 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740 2023-06-05 17:57:04,036 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740 2023-06-05 17:57:04,038 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:57:04,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:57:04,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=849353, jitterRate=0.08000870048999786}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:57:04,040 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:57:04,042 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987823998 2023-06-05 17:57:04,046 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-05 17:57:04,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-05 17:57:04,048 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39611,1685987823517, state=OPEN 2023-06-05 17:57:04,049 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-05 17:57:04,049 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-05 17:57:04,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-05 17:57:04,052 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39611,1685987823517 in 208 msec 2023-06-05 17:57:04,054 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-05 17:57:04,054 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-06-05 17:57:04,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 413 msec 2023-06-05 17:57:04,056 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987824056, completionTime=-1 2023-06-05 17:57:04,056 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-05 17:57:04,056 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-05 17:57:04,059 DEBUG [hconnection-0x59077784-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:57:04,061 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47700, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:57:04,063 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-05 17:57:04,063 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987884063 2023-06-05 17:57:04,063 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685987944063 2023-06-05 17:57:04,063 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-05 17:57:04,068 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36181,1685987823469-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:04,068 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36181,1685987823469-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:04,068 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36181,1685987823469-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-05 17:57:04,069 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:36181, period=300000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:04,069 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-05 17:57:04,069 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-05 17:57:04,069 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:57:04,070 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-05 17:57:04,070 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-05 17:57:04,072 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:57:04,073 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:57:04,075 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,075 DEBUG [HFileArchiver-9] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405 empty. 2023-06-05 17:57:04,076 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,076 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-05 17:57:04,086 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-05 17:57:04,087 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => d00dca3e6d91e9991f664a638a9a9405, NAME => 'hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp 2023-06-05 17:57:04,094 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:04,094 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing d00dca3e6d91e9991f664a638a9a9405, disabling compactions & flushes 2023-06-05 17:57:04,094 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,094 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,094 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. after waiting 0 ms 2023-06-05 17:57:04,094 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,095 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,095 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for d00dca3e6d91e9991f664a638a9a9405: 2023-06-05 17:57:04,097 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:57:04,098 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987824098"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987824098"}]},"ts":"1685987824098"} 2023-06-05 17:57:04,101 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-05 17:57:04,102 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:57:04,102 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987824102"}]},"ts":"1685987824102"} 2023-06-05 17:57:04,103 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-05 17:57:04,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d00dca3e6d91e9991f664a638a9a9405, ASSIGN}] 2023-06-05 17:57:04,110 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d00dca3e6d91e9991f664a638a9a9405, ASSIGN 2023-06-05 17:57:04,111 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=d00dca3e6d91e9991f664a638a9a9405, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39611,1685987823517; forceNewPlan=false, retain=false 2023-06-05 17:57:04,263 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d00dca3e6d91e9991f664a638a9a9405, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:04,263 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987824263"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987824263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987824263"}]},"ts":"1685987824263"} 2023-06-05 17:57:04,268 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure d00dca3e6d91e9991f664a638a9a9405, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:04,433 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d00dca3e6d91e9991f664a638a9a9405, NAME => 'hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:57:04,434 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,435 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:04,435 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,435 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,437 INFO 
[StoreOpener-d00dca3e6d91e9991f664a638a9a9405-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,440 DEBUG [StoreOpener-d00dca3e6d91e9991f664a638a9a9405-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/info 2023-06-05 17:57:04,440 DEBUG [StoreOpener-d00dca3e6d91e9991f664a638a9a9405-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/info 2023-06-05 17:57:04,440 INFO [StoreOpener-d00dca3e6d91e9991f664a638a9a9405-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d00dca3e6d91e9991f664a638a9a9405 columnFamilyName info 2023-06-05 17:57:04,441 INFO [StoreOpener-d00dca3e6d91e9991f664a638a9a9405-1] regionserver.HStore(310): Store=d00dca3e6d91e9991f664a638a9a9405/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-05 17:57:04,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d00dca3e6d91e9991f664a638a9a9405 2023-06-05 17:57:04,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:57:04,451 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d00dca3e6d91e9991f664a638a9a9405; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=763938, jitterRate=-0.0286027193069458}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:57:04,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d00dca3e6d91e9991f664a638a9a9405: 2023-06-05 17:57:04,454 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405., pid=6, masterSystemTime=1685987824424 2023-06-05 17:57:04,457 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,457 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:57:04,458 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d00dca3e6d91e9991f664a638a9a9405, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:04,458 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987824458"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987824458"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987824458"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987824458"}]},"ts":"1685987824458"} 2023-06-05 17:57:04,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-05 17:57:04,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure d00dca3e6d91e9991f664a638a9a9405, server=jenkins-hbase20.apache.org,39611,1685987823517 in 194 msec 2023-06-05 17:57:04,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-05 17:57:04,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=d00dca3e6d91e9991f664a638a9a9405, ASSIGN in 358 msec 2023-06-05 17:57:04,468 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:57:04,469 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987824469"}]},"ts":"1685987824469"} 2023-06-05 17:57:04,470 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-05 17:57:04,471 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-05 17:57:04,472 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:57:04,472 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:57:04,472 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:04,474 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 403 msec 2023-06-05 17:57:04,476 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-05 17:57:04,491 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, 
quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:57:04,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 18 msec 2023-06-05 17:57:04,498 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-05 17:57:04,507 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-05 17:57:04,510 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-05 17:57:04,525 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-05 17:57:04,526 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.990sec 2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36181,1685987823469-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-05 17:57:04,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36181,1685987823469-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-05 17:57:04,528 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-05 17:57:04,534 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ReadOnlyZKClient(139): Connect 0x27dea06c to 127.0.0.1:57589 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:57:04,541 DEBUG [Listener at localhost.localdomain/37565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a93c388, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-05 17:57:04,543 DEBUG [hconnection-0x1191a066-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-05 17:57:04,544 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47704, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-05 17:57:04,546 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,36181,1685987823469 2023-06-05 17:57:04,546 INFO [Listener at localhost.localdomain/37565] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-05 17:57:04,549 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-05 17:57:04,550 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:57:04,550 INFO [Listener at localhost.localdomain/37565] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-05 17:57:04,552 DEBUG [Listener at localhost.localdomain/37565] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-05 17:57:04,555 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-05 17:57:04,557 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-05 17:57:04,557 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-05 17:57:04,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-05 17:57:04,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-05 17:57:04,562 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-05 17:57:04,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-05 17:57:04,563 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-05 17:57:04,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:57:04,564 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,565 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615 empty. 2023-06-05 17:57:04,565 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,565 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-05 17:57:04,577 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-05 17:57:04,578 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ad739c8fb732e5fb54552adeb450615, NAME => 'TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/.tmp 2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] 
regionserver.HRegion(1604): Closing 0ad739c8fb732e5fb54552adeb450615, disabling compactions & flushes 2023-06-05 17:57:04,586 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. after waiting 0 ms 2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:04,586 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
2023-06-05 17:57:04,586 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:04,595 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-05 17:57:04,596 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987824596"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987824596"}]},"ts":"1685987824596"} 2023-06-05 17:57:04,598 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-05 17:57:04,599 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-05 17:57:04,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987824599"}]},"ts":"1685987824599"} 2023-06-05 17:57:04,600 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-05 17:57:04,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, ASSIGN}] 2023-06-05 17:57:04,604 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, ASSIGN 2023-06-05 17:57:04,605 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39611,1685987823517; forceNewPlan=false, retain=false 2023-06-05 17:57:04,756 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0ad739c8fb732e5fb54552adeb450615, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:04,756 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987824756"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987824756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987824756"}]},"ts":"1685987824756"} 2023-06-05 17:57:04,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:04,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
2023-06-05 17:57:04,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ad739c8fb732e5fb54552adeb450615, NAME => 'TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.', STARTKEY => '', ENDKEY => ''} 2023-06-05 17:57:04,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:04,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,920 INFO [StoreOpener-0ad739c8fb732e5fb54552adeb450615-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,922 DEBUG [StoreOpener-0ad739c8fb732e5fb54552adeb450615-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info 2023-06-05 17:57:04,922 DEBUG [StoreOpener-0ad739c8fb732e5fb54552adeb450615-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info 2023-06-05 17:57:04,923 INFO [StoreOpener-0ad739c8fb732e5fb54552adeb450615-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ad739c8fb732e5fb54552adeb450615 columnFamilyName info 2023-06-05 17:57:04,924 INFO [StoreOpener-0ad739c8fb732e5fb54552adeb450615-1] regionserver.HStore(310): Store=0ad739c8fb732e5fb54552adeb450615/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:04,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1055): writing seq id for 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:04,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:57:04,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0ad739c8fb732e5fb54552adeb450615; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=735471, jitterRate=-0.06480056047439575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:57:04,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:04,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615., pid=11, masterSystemTime=1685987824911 2023-06-05 17:57:04,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:04,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
2023-06-05 17:57:04,934 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0ad739c8fb732e5fb54552adeb450615, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:04,934 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987824934"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987824934"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987824934"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987824934"}]},"ts":"1685987824934"} 2023-06-05 17:57:04,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-05 17:57:04,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517 in 177 msec 2023-06-05 17:57:04,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-05 17:57:04,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, ASSIGN in 336 msec 2023-06-05 17:57:04,941 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-05 17:57:04,941 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987824941"}]},"ts":"1685987824941"} 2023-06-05 17:57:04,943 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-05 17:57:04,945 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-05 17:57:04,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 388 msec 2023-06-05 17:57:07,273 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-05 17:57:09,773 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-05 17:57:09,775 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-05 17:57:09,777 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-05 17:57:14,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36181] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-05 17:57:14,564 INFO [Listener at localhost.localdomain/37565] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-06-05 17:57:14,566 DEBUG [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-05 17:57:14,566 DEBUG [Listener at localhost.localdomain/37565] 
hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:14,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:14,584 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-05 17:57:14,596 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/7a26e8a8a41d4ad5b7d04f381868ec10 2023-06-05 17:57:14,604 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/7a26e8a8a41d4ad5b7d04f381868ec10 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10 2023-06-05 17:57:14,610 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10, entries=7, sequenceid=11, filesize=12.1 K 2023-06-05 17:57:14,611 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 0ad739c8fb732e5fb54552adeb450615 in 27ms, sequenceid=11, compaction requested=false 2023-06-05 17:57:14,611 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:14,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:14,612 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-05 17:57:14,624 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/e58f600d2dae4af8b7cc42d5dee000c6 2023-06-05 17:57:14,630 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/e58f600d2dae4af8b7cc42d5dee000c6 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6 2023-06-05 17:57:14,636 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6, entries=17, sequenceid=31, filesize=22.6 K 2023-06-05 17:57:14,637 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=8.41 KB/8608 for 0ad739c8fb732e5fb54552adeb450615 in 25ms, sequenceid=31, compaction requested=false 2023-06-05 17:57:14,637 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:14,638 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=34.8 K, sizeToCheck=16.0 K 2023-06-05 17:57:14,638 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:14,638 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6 because midkey is the same as first or last row 2023-06-05 17:57:16,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:16,628 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-05 17:57:16,647 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/3d504b6525cc4654b74502ab75438a73 2023-06-05 17:57:16,654 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/3d504b6525cc4654b74502ab75438a73 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73 2023-06-05 17:57:16,659 WARN 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-05 17:57:16,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 62 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987846659, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:16,661 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73, entries=9, sequenceid=43, filesize=14.2 K 2023-06-05 17:57:16,661 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=21.02 KB/21520 for 
0ad739c8fb732e5fb54552adeb450615 in 33ms, sequenceid=43, compaction requested=true 2023-06-05 17:57:16,662 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:16,662 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-06-05 17:57:16,662 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:16,662 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6 because midkey is the same as first or last row 2023-06-05 17:57:16,662 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:16,662 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-05 17:57:16,664 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-05 17:57:16,665 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 0ad739c8fb732e5fb54552adeb450615/info is initiating minor compaction (all files) 2023-06-05 17:57:16,665 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 0ad739c8fb732e5fb54552adeb450615/info in TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
2023-06-05 17:57:16,665 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp, totalSize=49.0 K 2023-06-05 17:57:16,665 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 7a26e8a8a41d4ad5b7d04f381868ec10, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685987834570 2023-06-05 17:57:16,666 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting e58f600d2dae4af8b7cc42d5dee000c6, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685987834585 2023-06-05 17:57:16,666 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 3d504b6525cc4654b74502ab75438a73, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685987834612 2023-06-05 17:57:16,679 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 0ad739c8fb732e5fb54552adeb450615#info#compaction#29 average throughput is 33.86 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-05 17:57:16,700 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/1c7f93d188074fffae96473c39f339fb as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb 2023-06-05 17:57:16,708 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 0ad739c8fb732e5fb54552adeb450615/info of 0ad739c8fb732e5fb54552adeb450615 into 1c7f93d188074fffae96473c39f339fb(size=39.6 K), total size for store is 39.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-05 17:57:16,708 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:16,708 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615., storeName=0ad739c8fb732e5fb54552adeb450615/info, priority=13, startTime=1685987836662; duration=0sec 2023-06-05 17:57:16,709 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.6 K, sizeToCheck=16.0 K 2023-06-05 17:57:16,709 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:16,709 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb because midkey is the same as first or last row 2023-06-05 17:57:16,709 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:26,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:26,730 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-06-05 17:57:26,745 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-05 17:57:26,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 73 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987856745, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:26,751 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=68 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/c28d96f47bfa4c3f9997920b903af7b2 2023-06-05 17:57:26,757 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/c28d96f47bfa4c3f9997920b903af7b2 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2 2023-06-05 17:57:26,761 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2, entries=21, sequenceid=68, filesize=26.9 K 2023-06-05 17:57:26,762 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=8.41 KB/8608 for 0ad739c8fb732e5fb54552adeb450615 in 32ms, sequenceid=68, compaction requested=false 2023-06-05 17:57:26,762 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:26,763 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=66.5 K, sizeToCheck=16.0 K 2023-06-05 17:57:26,763 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:26,763 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb because midkey is the same as first or last row 2023-06-05 17:57:36,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:36,782 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-05 17:57:36,797 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=80 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/f43939beb48249a99b36108d36ac773f 2023-06-05 17:57:36,806 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/f43939beb48249a99b36108d36ac773f as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f 2023-06-05 17:57:36,810 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f, entries=9, sequenceid=80, filesize=14.2 K 2023-06-05 17:57:36,811 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=1.05 KB/1076 for 0ad739c8fb732e5fb54552adeb450615 in 30ms, sequenceid=80, compaction requested=true 2023-06-05 17:57:36,811 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:36,811 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, sizeToCheck=16.0 K 2023-06-05 17:57:36,811 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:36,812 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb because midkey is the same as first or last row 2023-06-05 17:57:36,812 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-05 17:57:36,812 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-05 17:57:36,813 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82610 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-05 17:57:36,813 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 0ad739c8fb732e5fb54552adeb450615/info is initiating minor compaction (all files) 2023-06-05 17:57:36,813 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 0ad739c8fb732e5fb54552adeb450615/info in TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
2023-06-05 17:57:36,813 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp, totalSize=80.7 K 2023-06-05 17:57:36,813 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 1c7f93d188074fffae96473c39f339fb, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685987834570 2023-06-05 17:57:36,814 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting c28d96f47bfa4c3f9997920b903af7b2, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=68, earliestPutTs=1685987836629 2023-06-05 17:57:36,814 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting f43939beb48249a99b36108d36ac773f, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685987846730 2023-06-05 17:57:36,826 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 0ad739c8fb732e5fb54552adeb450615#info#compaction#32 average throughput is 32.32 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-05 17:57:36,842 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/cb14a5b0059b47d984f322a45eef8b26 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26 2023-06-05 17:57:36,848 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 0ad739c8fb732e5fb54552adeb450615/info of 0ad739c8fb732e5fb54552adeb450615 into cb14a5b0059b47d984f322a45eef8b26(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-05 17:57:36,848 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:36,848 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615., storeName=0ad739c8fb732e5fb54552adeb450615/info, priority=13, startTime=1685987856812; duration=0sec 2023-06-05 17:57:36,848 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K 2023-06-05 17:57:36,848 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-05 17:57:36,849 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:36,849 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:36,850 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36181] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,39611,1685987823517, parent={ENCODED => 0ad739c8fb732e5fb54552adeb450615, NAME => 'TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-05 17:57:36,855 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36181] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:36,860 DEBUG 
[RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36181] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=0ad739c8fb732e5fb54552adeb450615, daughterA=dbc39d0b055e39c976b30dd415068324, daughterB=017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:36,861 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=0ad739c8fb732e5fb54552adeb450615, daughterA=dbc39d0b055e39c976b30dd415068324, daughterB=017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:36,861 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=0ad739c8fb732e5fb54552adeb450615, daughterA=dbc39d0b055e39c976b30dd415068324, daughterB=017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:36,861 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=0ad739c8fb732e5fb54552adeb450615, daughterA=dbc39d0b055e39c976b30dd415068324, daughterB=017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:36,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, UNASSIGN}] 2023-06-05 17:57:36,871 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=0ad739c8fb732e5fb54552adeb450615, UNASSIGN 2023-06-05 17:57:36,872 INFO [PEWorker-4] 
assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ad739c8fb732e5fb54552adeb450615, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:36,872 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987856872"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987856872"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987856872"}]},"ts":"1685987856872"} 2023-06-05 17:57:36,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:37,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0ad739c8fb732e5fb54552adeb450615, disabling compactions & flushes 2023-06-05 17:57:37,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:37,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:37,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 
after waiting 0 ms 2023-06-05 17:57:37,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:37,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 0ad739c8fb732e5fb54552adeb450615 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-05 17:57:37,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/d7beb01b882d4f72a44a2d1fcb8d6852 2023-06-05 17:57:37,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.tmp/info/d7beb01b882d4f72a44a2d1fcb8d6852 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/d7beb01b882d4f72a44a2d1fcb8d6852 2023-06-05 17:57:37,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/d7beb01b882d4f72a44a2d1fcb8d6852, entries=1, sequenceid=85, filesize=5.8 K 2023-06-05 17:57:37,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 0ad739c8fb732e5fb54552adeb450615 in 
29ms, sequenceid=85, compaction requested=false 2023-06-05 17:57:37,072 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f] to archive 2023-06-05 17:57:37,073 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-05 17:57:37,075 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/7a26e8a8a41d4ad5b7d04f381868ec10 2023-06-05 17:57:37,076 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/e58f600d2dae4af8b7cc42d5dee000c6 2023-06-05 17:57:37,077 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/1c7f93d188074fffae96473c39f339fb 2023-06-05 17:57:37,078 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/3d504b6525cc4654b74502ab75438a73 2023-06-05 17:57:37,079 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/c28d96f47bfa4c3f9997920b903af7b2 2023-06-05 17:57:37,081 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/f43939beb48249a99b36108d36ac773f 2023-06-05 17:57:37,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1 2023-06-05 17:57:37,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. 2023-06-05 17:57:37,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0ad739c8fb732e5fb54552adeb450615: 2023-06-05 17:57:37,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,092 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ad739c8fb732e5fb54552adeb450615, regionState=CLOSED 2023-06-05 17:57:37,092 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987857092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987857092"}]},"ts":"1685987857092"} 2023-06-05 17:57:37,095 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-05 17:57:37,096 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 0ad739c8fb732e5fb54552adeb450615, server=jenkins-hbase20.apache.org,39611,1685987823517 in 220 msec 2023-06-05 17:57:37,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-05 17:57:37,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, 
region=0ad739c8fb732e5fb54552adeb450615, UNASSIGN in 227 msec 2023-06-05 17:57:37,107 INFO [PEWorker-1] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=0ad739c8fb732e5fb54552adeb450615, threads=2 2023-06-05 17:57:37,109 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26 for region: 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,109 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/d7beb01b882d4f72a44a2d1fcb8d6852 for region: 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,119 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/d7beb01b882d4f72a44a2d1fcb8d6852, top=true 2023-06-05 17:57:37,130 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/.splits/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852 for child: 017e9c207086433be1cb4738e64b5220, parent: 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,130 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete 
for store file: hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/d7beb01b882d4f72a44a2d1fcb8d6852 for region: 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,148 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26 for region: 0ad739c8fb732e5fb54552adeb450615 2023-06-05 17:57:37,148 DEBUG [PEWorker-1] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 0ad739c8fb732e5fb54552adeb450615 Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-06-05 17:57:37,174 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-06-05 17:57:37,176 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-06-05 17:57:37,179 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685987857178"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685987857178"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685987857178"}]},"ts":"1685987857178"} 2023-06-05 17:57:37,179 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987857178"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987857178"}]},"ts":"1685987857178"} 2023-06-05 17:57:37,179 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987857178"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987857178"}]},"ts":"1685987857178"} 2023-06-05 17:57:37,212 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39611] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-05 17:57:37,212 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-06-05 17:57:37,213 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-05 17:57:37,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=dbc39d0b055e39c976b30dd415068324, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=017e9c207086433be1cb4738e64b5220, ASSIGN}] 2023-06-05 17:57:37,223 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=dbc39d0b055e39c976b30dd415068324, ASSIGN 2023-06-05 17:57:37,223 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=017e9c207086433be1cb4738e64b5220, ASSIGN 2023-06-05 17:57:37,223 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/info/241e53fe33594bfd9ba02aaf21e5d039 2023-06-05 17:57:37,224 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=dbc39d0b055e39c976b30dd415068324, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,39611,1685987823517; forceNewPlan=false, retain=false 2023-06-05 17:57:37,224 INFO 
[PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=017e9c207086433be1cb4738e64b5220, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,39611,1685987823517; forceNewPlan=false, retain=false 2023-06-05 17:57:37,237 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/table/b17c12d9882f44e1bfcf0e741c9e7678 2023-06-05 17:57:37,243 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/info/241e53fe33594bfd9ba02aaf21e5d039 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info/241e53fe33594bfd9ba02aaf21e5d039 2023-06-05 17:57:37,248 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info/241e53fe33594bfd9ba02aaf21e5d039, entries=29, sequenceid=17, filesize=8.6 K 2023-06-05 17:57:37,249 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/table/b17c12d9882f44e1bfcf0e741c9e7678 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/table/b17c12d9882f44e1bfcf0e741c9e7678 2023-06-05 17:57:37,253 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/table/b17c12d9882f44e1bfcf0e741c9e7678, entries=4, sequenceid=17, filesize=4.8 K 2023-06-05 17:57:37,254 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 42ms, sequenceid=17, compaction requested=false 2023-06-05 17:57:37,255 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-05 17:57:37,377 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=dbc39d0b055e39c976b30dd415068324, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:37,377 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=017e9c207086433be1cb4738e64b5220, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:37,377 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987857377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987857377"}]},"ts":"1685987857377"} 2023-06-05 17:57:37,377 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987857377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987857377"}]},"ts":"1685987857377"} 2023-06-05 17:57:37,380 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, 
state=RUNNABLE; OpenRegionProcedure dbc39d0b055e39c976b30dd415068324, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:37,382 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517}] 2023-06-05 17:57:37,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. 2023-06-05 17:57:37,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dbc39d0b055e39c976b30dd415068324, NAME => 'TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-05 17:57:37,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:37,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,545 INFO [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,547 DEBUG [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info 2023-06-05 17:57:37,547 DEBUG [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info 2023-06-05 17:57:37,548 INFO [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dbc39d0b055e39c976b30dd415068324 columnFamilyName info 2023-06-05 17:57:37,565 DEBUG [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-bottom 2023-06-05 17:57:37,566 INFO [StoreOpener-dbc39d0b055e39c976b30dd415068324-1] regionserver.HStore(310): Store=dbc39d0b055e39c976b30dd415068324/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:37,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for dbc39d0b055e39c976b30dd415068324 2023-06-05 17:57:37,571 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened dbc39d0b055e39c976b30dd415068324; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=711710, jitterRate=-0.09501394629478455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:57:37,571 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for dbc39d0b055e39c976b30dd415068324: 2023-06-05 17:57:37,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324., pid=17, masterSystemTime=1685987857534 2023-06-05 17:57:37,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-05 17:57:37,573 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-05 17:57:37,574 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. 2023-06-05 17:57:37,574 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): dbc39d0b055e39c976b30dd415068324/info is initiating minor compaction (all files) 2023-06-05 17:57:37,574 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of dbc39d0b055e39c976b30dd415068324/info in TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. 
2023-06-05 17:57:37,574 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-bottom] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/.tmp, totalSize=71.4 K 2023-06-05 17:57:37,574 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685987834570 2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. 2023-06-05 17:57:37,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. 2023-06-05 17:57:37,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 
2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 017e9c207086433be1cb4738e64b5220, NAME => 'TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,575 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=dbc39d0b055e39c976b30dd415068324, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:37,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,575 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857575"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987857575"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987857575"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987857575"}]},"ts":"1685987857575"} 2023-06-05 17:57:37,577 INFO 
[StoreOpener-017e9c207086433be1cb4738e64b5220-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,578 DEBUG [StoreOpener-017e9c207086433be1cb4738e64b5220-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info 2023-06-05 17:57:37,578 DEBUG [StoreOpener-017e9c207086433be1cb4738e64b5220-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info 2023-06-05 17:57:37,579 INFO [StoreOpener-017e9c207086433be1cb4738e64b5220-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 017e9c207086433be1cb4738e64b5220 columnFamilyName info 2023-06-05 17:57:37,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-06-05 17:57:37,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, 
state=SUCCESS; OpenRegionProcedure dbc39d0b055e39c976b30dd415068324, server=jenkins-hbase20.apache.org,39611,1685987823517 in 197 msec 2023-06-05 17:57:37,582 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=dbc39d0b055e39c976b30dd415068324, ASSIGN in 358 msec 2023-06-05 17:57:37,583 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): dbc39d0b055e39c976b30dd415068324#info#compaction#36 average throughput is 15.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-05 17:57:37,590 DEBUG [StoreOpener-017e9c207086433be1cb4738e64b5220-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852 2023-06-05 17:57:37,596 DEBUG [StoreOpener-017e9c207086433be1cb4738e64b5220-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-top 2023-06-05 17:57:37,596 INFO [StoreOpener-017e9c207086433be1cb4738e64b5220-1] regionserver.HStore(310): Store=017e9c207086433be1cb4738e64b5220/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:57:37,597 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,598 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/.tmp/info/a9e2d5c8bcd64d59980546f5b560b805 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/a9e2d5c8bcd64d59980546f5b560b805 2023-06-05 17:57:37,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 017e9c207086433be1cb4738e64b5220 2023-06-05 17:57:37,601 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 017e9c207086433be1cb4738e64b5220; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=858057, jitterRate=0.09107597172260284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-05 17:57:37,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 017e9c207086433be1cb4738e64b5220: 2023-06-05 17:57:37,602 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open 
deploy tasks for TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., pid=18, masterSystemTime=1685987857534 2023-06-05 17:57:37,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-05 17:57:37,604 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-06-05 17:57:37,605 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 2023-06-05 17:57:37,605 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files) 2023-06-05 17:57:37,605 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 
2023-06-05 17:57:37,606 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-top, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=77.2 K 2023-06-05 17:57:37,606 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in dbc39d0b055e39c976b30dd415068324/info of dbc39d0b055e39c976b30dd415068324 into a9e2d5c8bcd64d59980546f5b560b805(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-05 17:57:37,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 2023-06-05 17:57:37,606 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 
2023-06-05 17:57:37,606 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for dbc39d0b055e39c976b30dd415068324: 2023-06-05 17:57:37,606 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324., storeName=dbc39d0b055e39c976b30dd415068324/info, priority=15, startTime=1685987857572; duration=0sec 2023-06-05 17:57:37,606 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:37,606 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] compactions.Compactor(207): Compacting cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685987834570 2023-06-05 17:57:37,606 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=017e9c207086433be1cb4738e64b5220, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,39611,1685987823517 2023-06-05 17:57:37,607 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685987857606"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987857606"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987857606"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987857606"}]},"ts":"1685987857606"} 2023-06-05 17:57:37,607 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852, 
keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685987856784 2023-06-05 17:57:37,610 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-06-05 17:57:37,610 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517 in 226 msec 2023-06-05 17:57:37,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-06-05 17:57:37,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=017e9c207086433be1cb4738e64b5220, ASSIGN in 388 msec 2023-06-05 17:57:37,613 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=0ad739c8fb732e5fb54552adeb450615, daughterA=dbc39d0b055e39c976b30dd415068324, daughterB=017e9c207086433be1cb4738e64b5220 in 757 msec 2023-06-05 17:57:37,614 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#37 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-05 17:57:37,634 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/e4136a68e3814955b6acd771691d5e67 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/e4136a68e3814955b6acd771691d5e67 2023-06-05 17:57:37,641 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into e4136a68e3814955b6acd771691d5e67(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-05 17:57:37,641 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220: 2023-06-05 17:57:37,641 INFO [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=14, startTime=1685987857602; duration=0sec 2023-06-05 17:57:37,641 DEBUG [RS:0;jenkins-hbase20:39611-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-05 17:57:38,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 77 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987868787, exception=org.apache.hadoop.hbase.NotServingRegionException: 
TestLogRolling-testLogRolling,,1685987824557.0ad739c8fb732e5fb54552adeb450615. is not online on jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:57:42,648 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-05 17:57:48,210 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=3, created chunk count=13, reused chunk count=29, reuseRatio=69.05%
2023-06-05 17:57:48,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2023-06-05 17:57:48,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:57:48,908 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-05 17:57:48,925 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/6f7d4ecfed4c4772a8b15366ae336853
2023-06-05 17:57:48,931 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/6f7d4ecfed4c4772a8b15366ae336853 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853
2023-06-05 17:57:48,936 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853, entries=7, sequenceid=99, filesize=12.1 K
2023-06-05 17:57:48,937 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for 017e9c207086433be1cb4738e64b5220 in 29ms, sequenceid=99, compaction requested=false
2023-06-05 17:57:48,938 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:48,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:57:48,939 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=23.12 KB heapSize=25 KB
2023-06-05 17:57:48,949 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=124 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/1f7e84751e184ba9bfa887d5a1dfb0a0
2023-06-05 17:57:48,954 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/1f7e84751e184ba9bfa887d5a1dfb0a0 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0
2023-06-05 17:57:48,959 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0, entries=22, sequenceid=124, filesize=27.9 K
2023-06-05 17:57:48,959 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=3.15 KB/3228 for 017e9c207086433be1cb4738e64b5220 in 20ms, sequenceid=124, compaction requested=true
2023-06-05 17:57:48,960 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:48,960 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-06-05 17:57:48,960 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:57:48,961 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 49222 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:57:48,961 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:57:48,961 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:57:48,961 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/e4136a68e3814955b6acd771691d5e67, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=48.1 K
2023-06-05 17:57:48,961 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting e4136a68e3814955b6acd771691d5e67, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685987846744
2023-06-05 17:57:48,962 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 6f7d4ecfed4c4772a8b15366ae336853, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1685987868900
2023-06-05 17:57:48,962 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 1f7e84751e184ba9bfa887d5a1dfb0a0, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=124, earliestPutTs=1685987868908
2023-06-05 17:57:48,972 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#40 average throughput is 32.84 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:57:48,985 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/d8eac2da16964e91ae28c1c2cfc149a1 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d8eac2da16964e91ae28c1c2cfc149a1
2023-06-05 17:57:48,992 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into d8eac2da16964e91ae28c1c2cfc149a1(size=38.7 K), total size for store is 38.7 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:57:48,992 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:48,992 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987868960; duration=0sec
2023-06-05 17:57:48,992 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:57:50,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:57:50,956 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-05 17:57:50,971 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=135 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/0cbccfc8d146452cbe6c21485f688bfc
2023-06-05 17:57:50,977 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/0cbccfc8d146452cbe6c21485f688bfc as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc
2023-06-05 17:57:50,984 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc, entries=7, sequenceid=135, filesize=12.1 K
2023-06-05 17:57:50,985 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 017e9c207086433be1cb4738e64b5220 in 29ms, sequenceid=135, compaction requested=false
2023-06-05 17:57:50,985 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:50,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:57:50,986 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB
2023-06-05 17:57:51,004 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=155 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/1bb9f5c320ba48208f3819e09619d73d
2023-06-05 17:57:51,011 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/1bb9f5c320ba48208f3819e09619d73d as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d
2023-06-05 17:57:51,018 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d, entries=17, sequenceid=155, filesize=22.7 K
2023-06-05 17:57:51,019 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=11.56 KB/11836 for 017e9c207086433be1cb4738e64b5220 in 33ms, sequenceid=155, compaction requested=true
2023-06-05 17:57:51,019 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:51,019 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:57:51,019 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:57:51,021 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 75258 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:57:51,021 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:57:51,021 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:57:51,021 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d8eac2da16964e91ae28c1c2cfc149a1, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=73.5 K
2023-06-05 17:57:51,021 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting d8eac2da16964e91ae28c1c2cfc149a1, keycount=32, bloomtype=ROW, size=38.7 K, encoding=NONE, compression=NONE, seqNum=124, earliestPutTs=1685987846744
2023-06-05 17:57:51,022 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 0cbccfc8d146452cbe6c21485f688bfc, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=135, earliestPutTs=1685987868939
2023-06-05 17:57:51,022 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 1bb9f5c320ba48208f3819e09619d73d, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1685987870957
2023-06-05 17:57:51,032 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#43 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:57:51,048 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/0f87198a7bb34620be1cfb382d745a2d as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0f87198a7bb34620be1cfb382d745a2d
2023-06-05 17:57:51,055 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into 0f87198a7bb34620be1cfb382d745a2d(size=64.1 K), total size for store is 64.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:57:51,056 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:51,056 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987871019; duration=0sec
2023-06-05 17:57:51,056 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:57:53,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:57:53,004 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB
2023-06-05 17:57:53,030 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=171 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/b62c9492311b4d3782c380d269502a88
2023-06-05 17:57:53,046 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/b62c9492311b4d3782c380d269502a88 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88
2023-06-05 17:57:53,050 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-05 17:57:53,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 163 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987883050, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:57:53,052 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88, entries=12, sequenceid=171, filesize=17.4 K
2023-06-05 17:57:53,053 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=17.86 KB/18292 for 017e9c207086433be1cb4738e64b5220 in 49ms, sequenceid=171, compaction requested=false
2023-06-05 17:57:53,053 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:57:55,090 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-05 17:58:03,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:03,148 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB
2023-06-05 17:58:03,165 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-05 17:58:03,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 177 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987893164, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:03,167 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=192 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/dabaf8208e2743dd854a4b0c1d800ad2
2023-06-05 17:58:03,173 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/dabaf8208e2743dd854a4b0c1d800ad2 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2
2023-06-05 17:58:03,178 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2, entries=18, sequenceid=192, filesize=23.7 K
2023-06-05 17:58:03,179 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 017e9c207086433be1cb4738e64b5220 in 31ms, sequenceid=192, compaction requested=true
2023-06-05 17:58:03,179 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:03,180 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:03,180 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:58:03,181 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 107768 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:58:03,181 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:58:03,181 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:03,181 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0f87198a7bb34620be1cfb382d745a2d, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=105.2 K
2023-06-05 17:58:03,181 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 0f87198a7bb34620be1cfb382d745a2d, keycount=56, bloomtype=ROW, size=64.1 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1685987846744
2023-06-05 17:58:03,182 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting b62c9492311b4d3782c380d269502a88, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=171, earliestPutTs=1685987870986
2023-06-05 17:58:03,182 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting dabaf8208e2743dd854a4b0c1d800ad2, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=192, earliestPutTs=1685987873005
2023-06-05 17:58:03,193 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#46 average throughput is 44.12 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:58:03,205 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/d0ed34ce9a224d74a17ab0ed2659538b as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0ed34ce9a224d74a17ab0ed2659538b
2023-06-05 17:58:03,211 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into d0ed34ce9a224d74a17ab0ed2659538b(size=95.9 K), total size for store is 95.9 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:58:03,211 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:03,211 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987883180; duration=0sec
2023-06-05 17:58:03,211 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:13,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:13,238 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB
2023-06-05 17:58:13,249 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=208 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/aada882c3d784dc9bfb45f61bd9af242
2023-06-05 17:58:13,257 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/aada882c3d784dc9bfb45f61bd9af242 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242
2023-06-05 17:58:13,264 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242, entries=12, sequenceid=208, filesize=17.4 K
2023-06-05 17:58:13,265 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 017e9c207086433be1cb4738e64b5220 in 27ms, sequenceid=208, compaction requested=false
2023-06-05 17:58:13,266 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:15,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:15,254 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-05 17:58:15,271 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=218 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/dde6d8e632fc4e519be2924ad468c6bc
2023-06-05 17:58:15,280 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/dde6d8e632fc4e519be2924ad468c6bc as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc
2023-06-05 17:58:15,287 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc, entries=7, sequenceid=218, filesize=12.1 K
2023-06-05 17:58:15,288 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for 017e9c207086433be1cb4738e64b5220 in 34ms, sequenceid=218, compaction requested=true
2023-06-05 17:58:15,288 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:15,289 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:15,289 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:58:15,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:15,289 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB
2023-06-05 17:58:15,290 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 128409 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:58:15,290 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:58:15,290 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:15,290 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0ed34ce9a224d74a17ab0ed2659538b, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=125.4 K
2023-06-05 17:58:15,291 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting d0ed34ce9a224d74a17ab0ed2659538b, keycount=86, bloomtype=ROW, size=95.9 K, encoding=NONE, compression=NONE, seqNum=192, earliestPutTs=1685987846744
2023-06-05 17:58:15,291 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting aada882c3d784dc9bfb45f61bd9af242, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=208, earliestPutTs=1685987883150
2023-06-05 17:58:15,292 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting dde6d8e632fc4e519be2924ad468c6bc, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=218, earliestPutTs=1685987893239
2023-06-05 17:58:15,301 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=239 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/5056a0ab0c46495c8aa48ec5ad931772
2023-06-05 17:58:15,309 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/5056a0ab0c46495c8aa48ec5ad931772 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772
2023-06-05 17:58:15,309 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#50 average throughput is 53.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:58:15,316 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772, entries=18, sequenceid=239, filesize=23.7 K
2023-06-05 17:58:15,317 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=8.41 KB/8608 for 017e9c207086433be1cb4738e64b5220 in 28ms, sequenceid=239, compaction requested=false
2023-06-05 17:58:15,317 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:15,322 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/df3a40de172649e0a317fb3ad8f19460 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/df3a40de172649e0a317fb3ad8f19460
2023-06-05 17:58:15,328 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into df3a40de172649e0a317fb3ad8f19460(size=116.0 K), total size for store is 139.7 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:58:15,328 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:15,328 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987895289; duration=0sec
2023-06-05 17:58:15,328 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:17,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:17,306 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB
2023-06-05 17:58:17,371 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=252 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/cc6d4e6f12c24d549822657dcd864835
2023-06-05 17:58:17,377 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/cc6d4e6f12c24d549822657dcd864835 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835
2023-06-05 17:58:17,382 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-05 17:58:17,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 234 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987907382, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:17,383 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835, entries=9, sequenceid=252, filesize=14.2 K
2023-06-05 17:58:17,384 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=21.02 KB/21520 for 017e9c207086433be1cb4738e64b5220 in 78ms, sequenceid=252, compaction requested=true
2023-06-05 17:58:17,384 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:17,384 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:17,384 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:58:17,385 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 157641 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:58:17,385 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:58:17,385 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:17,385 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/df3a40de172649e0a317fb3ad8f19460, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=153.9 K
2023-06-05 17:58:17,386 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting df3a40de172649e0a317fb3ad8f19460, keycount=105, bloomtype=ROW, size=116.0 K, encoding=NONE, compression=NONE, seqNum=218, earliestPutTs=1685987846744
2023-06-05 17:58:17,386 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 5056a0ab0c46495c8aa48ec5ad931772, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=239, earliestPutTs=1685987895255
2023-06-05 17:58:17,386 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting cc6d4e6f12c24d549822657dcd864835, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1685987895290
2023-06-05 17:58:17,396 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#52 average throughput is 67.73 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:58:17,407 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/d0a3bd768eba4f25a8248bf59c62c61c as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0a3bd768eba4f25a8248bf59c62c61c
2023-06-05 17:58:17,413 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into d0a3bd768eba4f25a8248bf59c62c61c(size=144.7 K), total size for store is 144.7 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:58:17,413 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:17,413 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987897384; duration=0sec
2023-06-05 17:58:17,413 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:27,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:27,398 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB
2023-06-05 17:58:27,415 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=277 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/277da25503554f5c847ab412ac7b3187
2023-06-05 17:58:27,417 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-05 17:58:27,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] ipc.CallRunner(144): callId: 245 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:47704 deadline: 1685987917417, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=017e9c207086433be1cb4738e64b5220, server=jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:27,422 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/277da25503554f5c847ab412ac7b3187 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187
2023-06-05 17:58:27,428 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187, entries=21, sequenceid=277, filesize=26.9 K
2023-06-05 17:58:27,429 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=8.41 KB/8608 for 017e9c207086433be1cb4738e64b5220 in 31ms, sequenceid=277, compaction requested=false
2023-06-05 17:58:27,429 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:37,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:37,518 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB
2023-06-05 17:58:37,929 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=289 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/9db77012ca5d43cba1104f3ddb520065
2023-06-05 17:58:37,940 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/9db77012ca5d43cba1104f3ddb520065 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065
2023-06-05 17:58:37,948 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065, entries=9, sequenceid=289, filesize=14.2 K
2023-06-05 17:58:37,949 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=1.05 KB/1076 for 017e9c207086433be1cb4738e64b5220 in 431ms, sequenceid=289, compaction requested=true
2023-06-05 17:58:37,949 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:37,949 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-06-05 17:58:37,949 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:58:37,950 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 190316 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:58:37,950 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:58:37,951 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:37,951 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0a3bd768eba4f25a8248bf59c62c61c, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=185.9 K
2023-06-05 17:58:37,951 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting d0a3bd768eba4f25a8248bf59c62c61c, keycount=132, bloomtype=ROW, size=144.7 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1685987846744
2023-06-05 17:58:37,951 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 277da25503554f5c847ab412ac7b3187, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=277, earliestPutTs=1685987897307
2023-06-05 17:58:37,952 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 9db77012ca5d43cba1104f3ddb520065, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=289, earliestPutTs=1685987907400
2023-06-05 17:58:37,967 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#55 average throughput is 55.41 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:58:37,981 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/982371d5ade5498b91cd48aa74b3796a as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/982371d5ade5498b91cd48aa74b3796a
2023-06-05 17:58:37,987 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into 982371d5ade5498b91cd48aa74b3796a(size=176.5 K), total size for store is 176.5 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:58:37,987 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:37,987 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987917949; duration=0sec
2023-06-05 17:58:37,988 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:39,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:39,537 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-05 17:58:39,548 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=300 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/81d546fdc54e4bd19e5029664fb86af6
2023-06-05 17:58:39,553 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/81d546fdc54e4bd19e5029664fb86af6 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6
2023-06-05 17:58:39,558 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6, entries=7, sequenceid=300, filesize=12.1 K
2023-06-05 17:58:39,559 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 017e9c207086433be1cb4738e64b5220 in 22ms, sequenceid=300, compaction requested=false
2023-06-05 17:58:39,559 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:39,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39611] regionserver.HRegion(9158): Flush requested on 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:39,560 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB
2023-06-05 17:58:39,569 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=323 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/c97be428803e4fe2a529e51196adfa08
2023-06-05 17:58:39,573 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/c97be428803e4fe2a529e51196adfa08 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08
2023-06-05 17:58:39,578 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08, entries=20, sequenceid=323, filesize=25.8 K
2023-06-05 17:58:39,579 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for 017e9c207086433be1cb4738e64b5220 in 19ms, sequenceid=323, compaction requested=true
2023-06-05 17:58:39,579 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:39,579 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-06-05 17:58:39,579 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-05 17:58:39,581 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 219559 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-05 17:58:39,581 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1912): 017e9c207086433be1cb4738e64b5220/info is initiating minor compaction (all files)
2023-06-05 17:58:39,581 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 017e9c207086433be1cb4738e64b5220/info in TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:39,581 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/982371d5ade5498b91cd48aa74b3796a, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08] into tmpdir=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp, totalSize=214.4 K
2023-06-05 17:58:39,581 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 982371d5ade5498b91cd48aa74b3796a, keycount=162, bloomtype=ROW, size=176.5 K, encoding=NONE, compression=NONE, seqNum=289, earliestPutTs=1685987846744
2023-06-05 17:58:39,582 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting 81d546fdc54e4bd19e5029664fb86af6, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=300, earliestPutTs=1685987917519
2023-06-05 17:58:39,582 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] compactions.Compactor(207): Compacting c97be428803e4fe2a529e51196adfa08, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=323, earliestPutTs=1685987919538
2023-06-05 17:58:39,595 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] throttle.PressureAwareThroughputController(145): 017e9c207086433be1cb4738e64b5220#info#compaction#58 average throughput is 96.97 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-05 17:58:39,611 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/fac19af7f82d46b6964ee3135d0f202f as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/fac19af7f82d46b6964ee3135d0f202f
2023-06-05 17:58:39,617 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 017e9c207086433be1cb4738e64b5220/info of 017e9c207086433be1cb4738e64b5220 into fac19af7f82d46b6964ee3135d0f202f(size=205.1 K), total size for store is 205.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-05 17:58:39,617 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:39,617 INFO [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., storeName=017e9c207086433be1cb4738e64b5220/info, priority=13, startTime=1685987919579; duration=0sec
2023-06-05 17:58:39,617 DEBUG [RS:0;jenkins-hbase20:39611-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-05 17:58:41,566 INFO [Listener at localhost.localdomain/37565] wal.AbstractTestLogRolling(188): after writing there are 0 log files
2023-06-05 17:58:41,598 INFO [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987823907 with entries=314, filesize=308.54 KB; new WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921567
2023-06-05 17:58:41,599 DEBUG [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36145,DS-28044e8d-fcd6-452c-99ab-f3a807e211b4,DISK], DatanodeInfoWithStorage[127.0.0.1:45559,DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2,DISK]]
2023-06-05 17:58:41,599 DEBUG [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987823907 is not closed yet, will try archiving it next time
2023-06-05 17:58:41,607 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegion(2446): Flush status journal for dbc39d0b055e39c976b30dd415068324:
2023-06-05 17:58:41,607 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2745): Flushing 017e9c207086433be1cb4738e64b5220 1/1 column families, dataSize=6.30 KB heapSize=7 KB
2023-06-05 17:58:41,617 INFO [Listener at localhost.localdomain/37565] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.30 KB at sequenceid=333 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/a99195d5536e47a893fef0e2a720ca95
2023-06-05 17:58:41,622 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/.tmp/info/a99195d5536e47a893fef0e2a720ca95 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/a99195d5536e47a893fef0e2a720ca95
2023-06-05 17:58:41,627 INFO [Listener at localhost.localdomain/37565] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/a99195d5536e47a893fef0e2a720ca95, entries=6, sequenceid=333, filesize=11.1 K
2023-06-05 17:58:41,628 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2948): Finished flush of dataSize ~6.30 KB/6456, heapSize ~6.98 KB/7152, currentSize=0 B/0 for 017e9c207086433be1cb4738e64b5220 in 21ms, sequenceid=333, compaction requested=false
2023-06-05 17:58:41,628 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegion(2446): Flush status journal for 017e9c207086433be1cb4738e64b5220:
2023-06-05 17:58:41,629 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2745): Flushing d00dca3e6d91e9991f664a638a9a9405 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-05 17:58:41,640 INFO [Listener at localhost.localdomain/37565] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/.tmp/info/a492bdbc37fd4a15b94e124d79406403
2023-06-05 17:58:41,646 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/.tmp/info/a492bdbc37fd4a15b94e124d79406403 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/info/a492bdbc37fd4a15b94e124d79406403
2023-06-05 17:58:41,651 INFO [Listener at localhost.localdomain/37565] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/info/a492bdbc37fd4a15b94e124d79406403, entries=2, sequenceid=6, filesize=4.8 K
2023-06-05 17:58:41,652 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for d00dca3e6d91e9991f664a638a9a9405 in 23ms, sequenceid=6, compaction requested=false
2023-06-05 17:58:41,653 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegion(2446): Flush status journal for d00dca3e6d91e9991f664a638a9a9405:
2023-06-05 17:58:41,653 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB
2023-06-05 17:58:41,665 INFO [Listener at localhost.localdomain/37565] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/info/5a6fdf2ee6344518bd4f83ab7c43224f
2023-06-05 17:58:41,671 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/.tmp/info/5a6fdf2ee6344518bd4f83ab7c43224f as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info/5a6fdf2ee6344518bd4f83ab7c43224f
2023-06-05 17:58:41,677 INFO [Listener at localhost.localdomain/37565] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/info/5a6fdf2ee6344518bd4f83ab7c43224f, entries=16, sequenceid=24, filesize=7.0 K
2023-06-05 17:58:41,678 INFO [Listener at localhost.localdomain/37565] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 25ms, sequenceid=24, compaction requested=false
2023-06-05 17:58:41,678 DEBUG [Listener at localhost.localdomain/37565] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-06-05 17:58:41,688 INFO [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921567 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921678
2023-06-05 17:58:41,688 DEBUG [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45559,DS-e636ed62-dc1c-467a-aec2-951d91a5fcb2,DISK], DatanodeInfoWithStorage[127.0.0.1:36145,DS-28044e8d-fcd6-452c-99ab-f3a807e211b4,DISK]]
2023-06-05 17:58:41,688 DEBUG [Listener at localhost.localdomain/37565] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921567 is not closed yet, will try archiving it next time
2023-06-05 17:58:41,688 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987823907 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987823907
2023-06-05 17:58:41,691 INFO [Listener at localhost.localdomain/37565] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1])
2023-06-05 17:58:41,694 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921567 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs/jenkins-hbase20.apache.org%2C39611%2C1685987823517.1685987921567
2023-06-05 17:58:41,791 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-05 17:58:41,792 INFO [Listener at localhost.localdomain/37565] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-06-05 17:58:41,792 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27dea06c to 127.0.0.1:57589
2023-06-05 17:58:41,792 DEBUG [Listener at localhost.localdomain/37565] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:41,792 DEBUG [Listener at localhost.localdomain/37565] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-05 17:58:41,792 DEBUG [Listener at localhost.localdomain/37565] util.JVMClusterUtil(257): Found active master hash=252327101, stopped=false
2023-06-05 17:58:41,792 INFO [Listener at localhost.localdomain/37565] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,36181,1685987823469
2023-06-05 17:58:41,795 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:58:41,795 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:58:41,795 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:41,795 INFO [Listener at localhost.localdomain/37565] procedure2.ProcedureExecutor(629): Stopping
2023-06-05 17:58:41,796 DEBUG [Listener at localhost.localdomain/37565] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x01e9be4a to 127.0.0.1:57589
2023-06-05 17:58:41,798 DEBUG [Listener at localhost.localdomain/37565] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:41,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:41,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:41,798 INFO [Listener at localhost.localdomain/37565] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39611,1685987823517' *****
2023-06-05 17:58:41,799 INFO [Listener at localhost.localdomain/37565] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-05 17:58:41,799 INFO [RS:0;jenkins-hbase20:39611] regionserver.HeapMemoryManager(220): Stopping
2023-06-05 17:58:41,799 INFO [RS:0;jenkins-hbase20:39611] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-05 17:58:41,799 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-05 17:58:41,800 INFO [RS:0;jenkins-hbase20:39611] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-05 17:58:41,800 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(3303): Received CLOSE for dbc39d0b055e39c976b30dd415068324
2023-06-05 17:58:41,800 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(3303): Received CLOSE for 017e9c207086433be1cb4738e64b5220
2023-06-05 17:58:41,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing dbc39d0b055e39c976b30dd415068324, disabling compactions & flushes
2023-06-05 17:58:41,800 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(3303): Received CLOSE for d00dca3e6d91e9991f664a638a9a9405
2023-06-05 17:58:41,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.
2023-06-05 17:58:41,801 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:41,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.
2023-06-05 17:58:41,801 DEBUG [RS:0;jenkins-hbase20:39611] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c5a9742 to 127.0.0.1:57589
2023-06-05 17:58:41,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324. after waiting 0 ms
2023-06-05 17:58:41,801 DEBUG [RS:0;jenkins-hbase20:39611] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:41,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.
2023-06-05 17:58:41,801 INFO [RS:0;jenkins-hbase20:39611] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-05 17:58:41,802 INFO [RS:0;jenkins-hbase20:39611] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-05 17:58:41,802 INFO [RS:0;jenkins-hbase20:39611] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-05 17:58:41,802 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-05 17:58:41,802 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1474): Waiting on 4 regions to close
2023-06-05 17:58:41,802 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1478): Online Regions={dbc39d0b055e39c976b30dd415068324=TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324., 017e9c207086433be1cb4738e64b5220=TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220., d00dca3e6d91e9991f664a638a9a9405=hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405., 1588230740=hbase:meta,,1.1588230740}
2023-06-05 17:58:41,804 DEBUG [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1504): Waiting on 017e9c207086433be1cb4738e64b5220, 1588230740, d00dca3e6d91e9991f664a638a9a9405, dbc39d0b055e39c976b30dd415068324
2023-06-05 17:58:41,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:58:41,804 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-bottom] to archive
2023-06-05 17:58:41,804 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:58:41,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:58:41,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:58:41,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:58:41,807 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-05 17:58:41,812 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615
2023-06-05 17:58:41,815 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1
2023-06-05 17:58:41,817 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-05 17:58:41,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-05 17:58:41,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:58:41,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-05 17:58:41,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/dbc39d0b055e39c976b30dd415068324/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88
2023-06-05 17:58:41,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.
2023-06-05 17:58:41,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for dbc39d0b055e39c976b30dd415068324:
2023-06-05 17:58:41,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685987856855.dbc39d0b055e39c976b30dd415068324.
2023-06-05 17:58:41,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 017e9c207086433be1cb4738e64b5220, disabling compactions & flushes
2023-06-05 17:58:41,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:41,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:41,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. after waiting 0 ms
2023-06-05 17:58:41,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.
2023-06-05 17:58:41,836 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615->hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/0ad739c8fb732e5fb54552adeb450615/info/cb14a5b0059b47d984f322a45eef8b26-top, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/e4136a68e3814955b6acd771691d5e67, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d8eac2da16964e91ae28c1c2cfc149a1, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0f87198a7bb34620be1cfb382d745a2d, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0ed34ce9a224d74a17ab0ed2659538b, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/df3a40de172649e0a317fb3ad8f19460, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0a3bd768eba4f25a8248bf59c62c61c, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/982371d5ade5498b91cd48aa74b3796a, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08] to archive
2023-06-05 17:58:41,836 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-05 17:58:41,838 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cb14a5b0059b47d984f322a45eef8b26.0ad739c8fb732e5fb54552adeb450615
2023-06-05 17:58:41,839 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/e4136a68e3814955b6acd771691d5e67 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/e4136a68e3814955b6acd771691d5e67
2023-06-05 17:58:41,840 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/TestLogRolling-testLogRolling=0ad739c8fb732e5fb54552adeb450615-d7beb01b882d4f72a44a2d1fcb8d6852
2023-06-05 17:58:41,841 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/6f7d4ecfed4c4772a8b15366ae336853
2023-06-05 17:58:41,843 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d8eac2da16964e91ae28c1c2cfc149a1 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d8eac2da16964e91ae28c1c2cfc149a1
2023-06-05 17:58:41,844 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1f7e84751e184ba9bfa887d5a1dfb0a0
2023-06-05 17:58:41,845 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0cbccfc8d146452cbe6c21485f688bfc
2023-06-05 17:58:41,846 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0f87198a7bb34620be1cfb382d745a2d to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/0f87198a7bb34620be1cfb382d745a2d
2023-06-05 17:58:41,847 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/1bb9f5c320ba48208f3819e09619d73d
2023-06-05 17:58:41,848 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/b62c9492311b4d3782c380d269502a88
2023-06-05 17:58:41,849 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0ed34ce9a224d74a17ab0ed2659538b to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0ed34ce9a224d74a17ab0ed2659538b
2023-06-05 17:58:41,849 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dabaf8208e2743dd854a4b0c1d800ad2
2023-06-05 17:58:41,850 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/aada882c3d784dc9bfb45f61bd9af242
2023-06-05 17:58:41,851 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/df3a40de172649e0a317fb3ad8f19460 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/df3a40de172649e0a317fb3ad8f19460
2023-06-05 17:58:41,852 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/dde6d8e632fc4e519be2924ad468c6bc
2023-06-05 17:58:41,853 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/5056a0ab0c46495c8aa48ec5ad931772
2023-06-05 17:58:41,854 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0a3bd768eba4f25a8248bf59c62c61c to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/d0a3bd768eba4f25a8248bf59c62c61c
2023-06-05 17:58:41,854 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835 to
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/cc6d4e6f12c24d549822657dcd864835 2023-06-05 17:58:41,855 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/277da25503554f5c847ab412ac7b3187 2023-06-05 17:58:41,856 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/982371d5ade5498b91cd48aa74b3796a to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/982371d5ade5498b91cd48aa74b3796a 2023-06-05 17:58:41,857 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065 to 
hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/9db77012ca5d43cba1104f3ddb520065 2023-06-05 17:58:41,858 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/81d546fdc54e4bd19e5029664fb86af6 2023-06-05 17:58:41,859 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08 to hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/archive/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/info/c97be428803e4fe2a529e51196adfa08 2023-06-05 17:58:41,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/default/TestLogRolling-testLogRolling/017e9c207086433be1cb4738e64b5220/recovered.edits/336.seqid, newMaxSeqId=336, maxSeqId=88 2023-06-05 17:58:41,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed 
TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 017e9c207086433be1cb4738e64b5220: 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685987856855.017e9c207086433be1cb4738e64b5220. 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d00dca3e6d91e9991f664a638a9a9405, disabling compactions & flushes 2023-06-05 17:58:41,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. after waiting 0 ms 2023-06-05 17:58:41,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 2023-06-05 17:58:41,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/data/hbase/namespace/d00dca3e6d91e9991f664a638a9a9405/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-05 17:58:41,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405. 
2023-06-05 17:58:41,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d00dca3e6d91e9991f664a638a9a9405:
2023-06-05 17:58:41,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685987824069.d00dca3e6d91e9991f664a638a9a9405.
2023-06-05 17:58:42,004 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39611,1685987823517; all regions closed.
2023-06-05 17:58:42,004 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:42,009 DEBUG [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs
2023-06-05 17:58:42,009 INFO [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39611%2C1685987823517.meta:.meta(num 1685987824013)
2023-06-05 17:58:42,009 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/WALs/jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:42,013 DEBUG [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/oldWALs
2023-06-05 17:58:42,013 INFO [RS:0;jenkins-hbase20:39611] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39611%2C1685987823517:(num 1685987921678)
2023-06-05 17:58:42,013 DEBUG [RS:0;jenkins-hbase20:39611] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:42,013 INFO [RS:0;jenkins-hbase20:39611] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:58:42,014 INFO [RS:0;jenkins-hbase20:39611] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-05 17:58:42,014 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:58:42,014 INFO [RS:0;jenkins-hbase20:39611] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39611
2023-06-05 17:58:42,016 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:58:42,016 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39611,1685987823517
2023-06-05 17:58:42,016 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:58:42,017 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,39611,1685987823517]
2023-06-05 17:58:42,017 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,39611,1685987823517; numProcessing=1
2023-06-05 17:58:42,018 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,39611,1685987823517 already deleted, retry=false
2023-06-05 17:58:42,018 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,39611,1685987823517 expired; onlineServers=0
2023-06-05 17:58:42,018 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36181,1685987823469' *****
2023-06-05 17:58:42,018 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-05 17:58:42,018 DEBUG [M:0;jenkins-hbase20:36181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b35a184, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-06-05 17:58:42,019 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36181,1685987823469
2023-06-05 17:58:42,019 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36181,1685987823469; all regions closed.
2023-06-05 17:58:42,019 DEBUG [M:0;jenkins-hbase20:36181] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:42,019 DEBUG [M:0;jenkins-hbase20:36181] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-05 17:58:42,019 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-05 17:58:42,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987823647] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987823647,5,FailOnTimeoutGroup]
2023-06-05 17:58:42,019 DEBUG [M:0;jenkins-hbase20:36181] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-05 17:58:42,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987823647] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987823647,5,FailOnTimeoutGroup]
2023-06-05 17:58:42,020 INFO [M:0;jenkins-hbase20:36181] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-05 17:58:42,020 INFO [M:0;jenkins-hbase20:36181] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-05 17:58:42,020 INFO [M:0;jenkins-hbase20:36181] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-06-05 17:58:42,021 DEBUG [M:0;jenkins-hbase20:36181] master.HMaster(1512): Stopping service threads
2023-06-05 17:58:42,021 INFO [M:0;jenkins-hbase20:36181] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-05 17:58:42,021 ERROR [M:0;jenkins-hbase20:36181] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-05 17:58:42,021 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-05 17:58:42,021 INFO [M:0;jenkins-hbase20:36181] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-05 17:58:42,021 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:42,021 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-05 17:58:42,022 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:58:42,022 DEBUG [M:0;jenkins-hbase20:36181] zookeeper.ZKUtil(398): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-05 17:58:42,022 WARN [M:0;jenkins-hbase20:36181] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-05 17:58:42,022 INFO [M:0;jenkins-hbase20:36181] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-05 17:58:42,022 INFO [M:0;jenkins-hbase20:36181] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-05 17:58:42,022 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:58:42,023 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:42,023 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:42,023 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:58:42,023 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:42,023 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB
2023-06-05 17:58:42,034 INFO [M:0;jenkins-hbase20:36181] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/839ebf0f7f4143c6bd56a464dcc626c9
2023-06-05 17:58:42,039 INFO [M:0;jenkins-hbase20:36181] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 839ebf0f7f4143c6bd56a464dcc626c9
2023-06-05 17:58:42,041 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/839ebf0f7f4143c6bd56a464dcc626c9 as hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/839ebf0f7f4143c6bd56a464dcc626c9
2023-06-05 17:58:42,046 INFO [M:0;jenkins-hbase20:36181] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 839ebf0f7f4143c6bd56a464dcc626c9
2023-06-05 17:58:42,047 INFO [M:0;jenkins-hbase20:36181] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43409/user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/839ebf0f7f4143c6bd56a464dcc626c9, entries=18, sequenceid=160, filesize=6.9 K
2023-06-05 17:58:42,047 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=160, compaction requested=false
2023-06-05 17:58:42,052 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:42,052 DEBUG [M:0;jenkins-hbase20:36181] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:58:42,053 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e9e8423d-7348-d085-3cce-4281c13c2f3e/MasterData/WALs/jenkins-hbase20.apache.org,36181,1685987823469
2023-06-05 17:58:42,057 INFO [M:0;jenkins-hbase20:36181] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-05 17:58:42,057 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:58:42,057 INFO [M:0;jenkins-hbase20:36181] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36181
2023-06-05 17:58:42,060 DEBUG [M:0;jenkins-hbase20:36181] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,36181,1685987823469 already deleted, retry=false
2023-06-05 17:58:42,117 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:58:42,117 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): regionserver:39611-0x101bc6aa7820001, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:58:42,117 INFO [RS:0;jenkins-hbase20:39611] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39611,1685987823517; zookeeper connection closed.
2023-06-05 17:58:42,118 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@882f539] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@882f539
2023-06-05 17:58:42,118 INFO [Listener at localhost.localdomain/37565] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-05 17:58:42,217 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:58:42,217 DEBUG [Listener at localhost.localdomain/37565-EventThread] zookeeper.ZKWatcher(600): master:36181-0x101bc6aa7820000, quorum=127.0.0.1:57589, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-05 17:58:42,217 INFO [M:0;jenkins-hbase20:36181] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36181,1685987823469; zookeeper connection closed.
2023-06-05 17:58:42,219 WARN [Listener at localhost.localdomain/37565] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:58:42,228 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:58:42,339 WARN [BP-1522619669-148.251.75.209-1685987822947 heartbeating to localhost.localdomain/127.0.0.1:43409] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:58:42,339 WARN [BP-1522619669-148.251.75.209-1685987822947 heartbeating to localhost.localdomain/127.0.0.1:43409] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1522619669-148.251.75.209-1685987822947 (Datanode Uuid 640cdc3f-58c8-4b88-9c8d-b7f9cc025907) service to localhost.localdomain/127.0.0.1:43409
2023-06-05 17:58:42,341 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/dfs/data/data3/current/BP-1522619669-148.251.75.209-1685987822947] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:58:42,341 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/dfs/data/data4/current/BP-1522619669-148.251.75.209-1685987822947] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:58:42,343 WARN [Listener at localhost.localdomain/37565] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-05 17:58:42,347 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-05 17:58:42,458 WARN [BP-1522619669-148.251.75.209-1685987822947 heartbeating to localhost.localdomain/127.0.0.1:43409] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-05 17:58:42,458 WARN [BP-1522619669-148.251.75.209-1685987822947 heartbeating to localhost.localdomain/127.0.0.1:43409] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1522619669-148.251.75.209-1685987822947 (Datanode Uuid 0e1f1bcf-4401-4a5b-9ca1-be7168946673) service to localhost.localdomain/127.0.0.1:43409
2023-06-05 17:58:42,459 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/dfs/data/data1/current/BP-1522619669-148.251.75.209-1685987822947] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:58:42,459 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/cluster_766340aa-64db-8fb6-cbe2-12d9de74b5da/dfs/data/data2/current/BP-1522619669-148.251.75.209-1685987822947] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-05 17:58:42,478 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-05 17:58:42,593 INFO [Listener at localhost.localdomain/37565] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-05 17:58:42,626 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-05 17:58:42,635 INFO [Listener at localhost.localdomain/37565] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=108 (was 96) - Thread LEAK? -, OpenFileDescriptor=545 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=38 (was 95), ProcessCount=167 (was 167), AvailableMemoryMB=6088 (was 6787)
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=108, OpenFileDescriptor=545, MaxFileDescriptor=60000, SystemLoadAverage=38, ProcessCount=167, AvailableMemoryMB=6089
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/hadoop.log.dir so I do NOT create it in target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e25e124e-6f75-fcbf-ee56-c4e21ba01d8c/hadoop.tmp.dir so I do NOT create it in target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d, deleteOnExit=true
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-05 17:58:42,645 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/test.cache.data in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/hadoop.tmp.dir in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/hadoop.log.dir in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-05 17:58:42,646 DEBUG [Listener at localhost.localdomain/37565] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:58:42,646 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/nfs.dump.dir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/java.io.tmpdir in system properties and HBase conf
2023-06-05 17:58:42,647 INFO [Listener at
localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-05 17:58:42,648 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-05 17:58:42,648 INFO [Listener at localhost.localdomain/37565] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-05 17:58:42,649 WARN [Listener at localhost.localdomain/37565] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
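The DatanodeManager WARN above fires because the configured stale-datanode interval (30000 ms) exceeds the computed heartbeat expiry. A minimal sketch of that sanity check, assuming HDFS's usual expiry formula (2 × recheck interval + 10 × heartbeat interval); the 5000 ms recheck value is an inferred test setting, chosen so the expiry works out to the 20000 ms shown in the log:

```python
def heartbeat_expire_ms(recheck_ms, heartbeat_interval_s):
    # HDFS considers a datanode dead after 2 * recheck + 10 * heartbeat.
    return 2 * recheck_ms + 10 * 1000 * heartbeat_interval_s

def stale_interval_warning(stale_ms, recheck_ms, heartbeat_interval_s):
    # The WARN above is emitted when the "stale" threshold is larger than
    # the "dead" threshold, i.e. nodes would be declared dead before stale.
    return stale_ms > heartbeat_expire_ms(recheck_ms, heartbeat_interval_s)

# Values matching this log: dfs.heartbeat.interval=1 (seconds);
# recheck=5000 ms is an assumption consistent with expiry=20000 ms.
print(heartbeat_expire_ms(5000, 1))            # 20000
print(stale_interval_warning(30000, 5000, 1))  # True -> the WARN fires
```

The warning is harmless in a minicluster; the tight intervals are deliberate so tests detect node loss quickly.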
2023-06-05 17:58:42,651 WARN [Listener at localhost.localdomain/37565] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:58:42,651 WARN [Listener at localhost.localdomain/37565] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:58:42,678 WARN [Listener at localhost.localdomain/37565] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:58:42,680 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:58:42,698 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/java.io.tmpdir/Jetty_localhost_localdomain_38223_hdfs____.ght9rs/webapp
2023-06-05 17:58:42,778 INFO [Listener at localhost.localdomain/37565] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38223
2023-06-05 17:58:42,779 WARN [Listener at localhost.localdomain/37565] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
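The repeated "No unit for ... assuming SECONDS/MILLISECONDS" warnings come from Hadoop parsing duration properties that lack a unit suffix and falling back to the property's default unit. A hedged sketch of that parsing behavior (not Hadoop's actual `Configuration.getTimeDuration` code, just the rule the log messages describe):

```python
import re

# Unit suffix -> milliseconds multiplier.
_UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}

def get_time_duration_ms(value, default_unit):
    """Parse '30s', '1800000', '1' etc.; when no suffix is present,
    assume default_unit (this is when Hadoop logs the WARN above)."""
    m = re.fullmatch(r"(\d+)\s*([a-z]*)", value.strip().lower())
    number, unit = int(m.group(1)), m.group(2) or default_unit
    return number * _UNITS_MS[unit]

print(get_time_duration_ms("1", "s"))         # dfs.heartbeat.interval(1) -> 1000 ms
print(get_time_duration_ms("0", "ms"))        # dfs.namenode.safemode.extension(0) -> 0 ms
print(get_time_duration_ms("1800000", "ms"))  # dfs.datanode.outliers.report.interval -> 1800000 ms
```

Writing the values with explicit suffixes (e.g. `1s`) would silence these warnings.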
2023-06-05 17:58:42,780 WARN [Listener at localhost.localdomain/37565] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-05 17:58:42,780 WARN [Listener at localhost.localdomain/37565] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-05 17:58:42,803 WARN [Listener at localhost.localdomain/35041] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:58:42,818 WARN [Listener at localhost.localdomain/35041] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:58:42,820 WARN [Listener at localhost.localdomain/35041] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:58:42,821 INFO [Listener at localhost.localdomain/35041] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:58:42,825 INFO [Listener at localhost.localdomain/35041] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/java.io.tmpdir/Jetty_localhost_40253_datanode____6n3ry3/webapp
2023-06-05 17:58:42,894 INFO [Listener at localhost.localdomain/35041] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40253
2023-06-05 17:58:42,900 WARN [Listener at localhost.localdomain/33455] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:58:42,909 WARN [Listener at localhost.localdomain/33455] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-05 17:58:42,914 WARN [Listener at localhost.localdomain/33455] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-05 17:58:42,915 INFO [Listener at localhost.localdomain/33455] log.Slf4jLog(67): jetty-6.1.26
2023-06-05 17:58:42,918 INFO [Listener at localhost.localdomain/33455] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/java.io.tmpdir/Jetty_localhost_40499_datanode____.61vgld/webapp
2023-06-05 17:58:42,958 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e05cf48aede499f: Processing first storage report for DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39 from datanode 240c544f-9bd4-475d-ba76-8cf367a08f21
2023-06-05 17:58:42,958 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e05cf48aede499f: from storage DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39 node DatanodeRegistration(127.0.0.1:43667, datanodeUuid=240c544f-9bd4-475d-ba76-8cf367a08f21, infoPort=41143, infoSecurePort=0, ipcPort=33455, storageInfo=lv=-57;cid=testClusterID;nsid=1106860732;c=1685987922652), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:58:42,958 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e05cf48aede499f: Processing first storage report for DS-2583c0ac-5708-4b0d-bedf-73d368ed1c2a from datanode 240c544f-9bd4-475d-ba76-8cf367a08f21
2023-06-05 17:58:42,958 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e05cf48aede499f: from storage DS-2583c0ac-5708-4b0d-bedf-73d368ed1c2a node DatanodeRegistration(127.0.0.1:43667, datanodeUuid=240c544f-9bd4-475d-ba76-8cf367a08f21, infoPort=41143, infoSecurePort=0, ipcPort=33455, storageInfo=lv=-57;cid=testClusterID;nsid=1106860732;c=1685987922652), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:58:43,000 INFO [Listener at localhost.localdomain/33455] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40499
2023-06-05 17:58:43,006 WARN [Listener at localhost.localdomain/35095] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-05 17:58:43,078 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e58d2184889431f: Processing first storage report for DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950 from datanode 0bb19d44-7298-44db-835c-40c5790a50cd
2023-06-05 17:58:43,078 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e58d2184889431f: from storage DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950 node DatanodeRegistration(127.0.0.1:37661, datanodeUuid=0bb19d44-7298-44db-835c-40c5790a50cd, infoPort=33007, infoSecurePort=0, ipcPort=35095, storageInfo=lv=-57;cid=testClusterID;nsid=1106860732;c=1685987922652), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-05 17:58:43,078 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e58d2184889431f: Processing first storage report for DS-e0e2c607-e68d-4f4c-aa0b-4521adaa6b0e from datanode 0bb19d44-7298-44db-835c-40c5790a50cd
2023-06-05 17:58:43,078 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e58d2184889431f: from storage DS-e0e2c607-e68d-4f4c-aa0b-4521adaa6b0e node DatanodeRegistration(127.0.0.1:37661, datanodeUuid=0bb19d44-7298-44db-835c-40c5790a50cd, infoPort=33007, infoSecurePort=0, ipcPort=35095, storageInfo=lv=-57;cid=testClusterID;nsid=1106860732;c=1685987922652), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-05 17:58:43,113 DEBUG [Listener at localhost.localdomain/35095] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852
2023-06-05 17:58:43,116 INFO [Listener at localhost.localdomain/35095] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/zookeeper_0, clientPort=58631, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-05 17:58:43,118 INFO [Listener at localhost.localdomain/35095] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58631
2023-06-05 17:58:43,118 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,119 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,139 INFO [Listener at localhost.localdomain/35095] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27 with version=8
2023-06-05 17:58:43,139 INFO [Listener at localhost.localdomain/35095] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41259/user/jenkins/test-data/533345c4-9ff7-5c93-4e17-b6afc73e18b6/hbase-staging
2023-06-05 17:58:43,141 INFO [Listener at localhost.localdomain/35095] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:58:43,141 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,141 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,142 INFO [Listener at localhost.localdomain/35095] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:58:43,142 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,142 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:58:43,142 INFO [Listener at localhost.localdomain/35095] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:58:43,144 INFO [Listener at localhost.localdomain/35095] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36283
2023-06-05 17:58:43,145 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,146 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,147 INFO [Listener at localhost.localdomain/35095] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36283 connecting to ZooKeeper ensemble=127.0.0.1:58631
2023-06-05 17:58:43,152 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:362830x0, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:58:43,154 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36283-0x101bc6c2cd70000 connected
2023-06-05 17:58:43,165 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:58:43,166 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:43,166 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:58:43,170 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36283
2023-06-05 17:58:43,170 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36283
2023-06-05 17:58:43,170 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36283
2023-06-05 17:58:43,174 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36283
2023-06-05 17:58:43,174 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36283
2023-06-05 17:58:43,174 INFO [Listener at localhost.localdomain/35095] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27, hbase.cluster.distributed=false
2023-06-05 17:58:43,184 INFO [Listener at localhost.localdomain/35095] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-05 17:58:43,185 INFO [Listener at localhost.localdomain/35095] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-05 17:58:43,187 INFO [Listener at localhost.localdomain/35095] ipc.NettyRpcServer(120): Bind to /148.251.75.209:42835
2023-06-05 17:58:43,187 INFO [Listener at localhost.localdomain/35095] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-05 17:58:43,188 DEBUG [Listener at localhost.localdomain/35095] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-05 17:58:43,188 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,189 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,190 INFO [Listener at localhost.localdomain/35095] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42835 connecting to ZooKeeper ensemble=127.0.0.1:58631
2023-06-05 17:58:43,194 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:428350x0, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-05 17:58:43,196 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): regionserver:428350x0, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:58:43,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42835-0x101bc6c2cd70001 connected
2023-06-05 17:58:43,197 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:43,197 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ZKUtil(164): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-05 17:58:43,198 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42835
2023-06-05 17:58:43,199 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42835
2023-06-05 17:58:43,202 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42835
2023-06-05 17:58:43,202 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42835
2023-06-05 17:58:43,204 DEBUG [Listener at localhost.localdomain/35095] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42835
2023-06-05 17:58:43,205 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:43,220 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-05 17:58:43,220 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:43,226 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-05 17:58:43,226 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-05 17:58:43,226 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:43,227 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-05 17:58:43,228 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,36283,1685987923141 from backup master directory
2023-06-05 17:58:43,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-05 17:58:43,229 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:43,229 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-05 17:58:43,229 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-05 17:58:43,229 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:43,245 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/hbase.id with ID: 68ce913a-f237-4d95-8798-f0d1aa8787aa
2023-06-05 17:58:43,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:43,257 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:43,267 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6933b787 to 127.0.0.1:58631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-05 17:58:43,271 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1040e3f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-05 17:58:43,271 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-05 17:58:43,272 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-05 17:58:43,272 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:58:43,274 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store-tmp
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:58:43,284 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:43,284 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
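The close entries above trace a fixed ordering: disable compactions & flushes, wait for the close lock, disable updates, then mark the region closed and record a close journal. A hedged sketch of that ordering (the class and method names here are hypothetical, not HBase's actual `HRegion` implementation):

```python
import threading

class RegionCloseSketch:
    """Illustrative model of the close sequence visible in the log above."""

    def __init__(self, name):
        self.name = name
        self.close_lock = threading.Lock()
        self.compactions_enabled = True
        self.updates_enabled = True
        self.closed = False
        self.journal = []

    def close(self):
        # "Closing ..., disabling compactions & flushes"
        self.compactions_enabled = False
        self.journal.append("closing")
        # "Waiting without time limit for close lock" / "Acquired close lock"
        with self.close_lock:
            # "Updates disabled for region"
            self.updates_enabled = False
            # "Closed master:store,,1...."
            self.closed = True
            self.journal.append("closed")
        # "Region close journal for ..."
        return self.journal
```

The lock-before-disable ordering matters: no new writes can slip in between "updates disabled" and "closed".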
2023-06-05 17:58:43,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:58:43,285 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/WALs/jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:43,287 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36283%2C1685987923141, suffix=, logDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/WALs/jenkins-hbase20.apache.org,36283,1685987923141, archiveDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/oldWALs, maxLogs=10
2023-06-05 17:58:43,293 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/WALs/jenkins-hbase20.apache.org,36283,1685987923141/jenkins-hbase20.apache.org%2C36283%2C1685987923141.1685987923288
2023-06-05 17:58:43,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37661,DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950,DISK], DatanodeInfoWithStorage[127.0.0.1:43667,DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39,DISK]]
2023-06-05 17:58:43,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:58:43,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:58:43,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,295 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,296 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-05 17:58:43,296 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-05 17:58:43,297 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:58:43,297 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,298 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,300 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-05 17:58:43,302 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-05 17:58:43,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=864857, jitterRate=0.09972372651100159}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-05 17:58:43,303 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-05 17:58:43,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-05 17:58:43,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-05 17:58:43,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-05 17:58:43,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-05 17:58:43,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec
2023-06-05 17:58:43,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-06-05 17:58:43,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-06-05 17:58:43,306 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-06-05 17:58:43,307 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-06-05 17:58:43,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-05 17:58:43,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-05 17:58:43,318 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-05 17:58:43,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-05 17:58:43,318 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-05 17:58:43,320 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:58:43,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-05 17:58:43,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-05 17:58:43,321 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-05 17:58:43,322 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:58:43,322 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-05 17:58:43,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,36283,1685987923141, sessionid=0x101bc6c2cd70000, setting cluster-up flag (Was=false) 2023-06-05 17:58:43,326 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:58:43,328 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-05 17:58:43,329 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36283,1685987923141 2023-06-05 17:58:43,331 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, 
quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-05 17:58:43,334 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-05 17:58:43,334 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36283,1685987923141 2023-06-05 17:58:43,335 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.hbase-snapshot/.tmp 2023-06-05 17:58:43,340 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): 
Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-05 17:58:43,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685987953343 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): 
Creating 1 old WALs cleaner threads 2023-06-05 17:58:43,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:58:43,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-05 17:58:43,344 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:58:43,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-05 17:58:43,344 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-05 17:58:43,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-05 17:58:43,345 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-05 17:58:43,346 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-05 17:58:43,346 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-05 17:58:43,349 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987923346,5,FailOnTimeoutGroup] 2023-06-05 17:58:43,349 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987923349,5,FailOnTimeoutGroup] 2023-06-05 17:58:43,349 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-05 17:58:43,349 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-05 17:58:43,349 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-05 17:58:43,349 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-05 17:58:43,359 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:58:43,360 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-05 17:58:43,360 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27 2023-06-05 17:58:43,367 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-05 
17:58:43,370 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-05 17:58:43,371 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/info 2023-06-05 17:58:43,372 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-05 17:58:43,372 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:58:43,372 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-05 17:58:43,373 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/rep_barrier 2023-06-05 17:58:43,373 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-05 17:58:43,374 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:58:43,374 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-05 17:58:43,375 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/table 2023-06-05 17:58:43,375 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-05 17:58:43,375 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-05 17:58:43,376 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740 2023-06-05 17:58:43,376 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740 2023-06-05 17:58:43,378 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-05 17:58:43,379 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-05 17:58:43,380 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-05 17:58:43,381 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855986, jitterRate=0.08844348788261414}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-05 17:58:43,381 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-05 17:58:43,381 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-05 17:58:43,381 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-05 17:58:43,382 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-05 17:58:43,382 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-05 17:58:43,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-05 17:58:43,384 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-05 17:58:43,385 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-05 17:58:43,406 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(951): ClusterId : 68ce913a-f237-4d95-8798-f0d1aa8787aa 2023-06-05 17:58:43,407 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-05 17:58:43,410 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-05 17:58:43,410 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-05 17:58:43,412 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-05 17:58:43,413 DEBUG [RS:0;jenkins-hbase20:42835] zookeeper.ReadOnlyZKClient(139): Connect 0x789043ad to 127.0.0.1:58631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-05 17:58:43,417 DEBUG [RS:0;jenkins-hbase20:42835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7657b2cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-06-05 17:58:43,417 DEBUG [RS:0;jenkins-hbase20:42835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@726828dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-05 17:58:43,429 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:42835 2023-06-05 17:58:43,429 INFO [RS:0;jenkins-hbase20:42835] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-05 17:58:43,429 INFO [RS:0;jenkins-hbase20:42835] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-05 17:58:43,429 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1022): About to register with Master. 2023-06-05 17:58:43,429 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36283,1685987923141 with isa=jenkins-hbase20.apache.org/148.251.75.209:42835, startcode=1685987923184 2023-06-05 17:58:43,430 DEBUG [RS:0;jenkins-hbase20:42835] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-05 17:58:43,433 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47593, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-05 17:58:43,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36283] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,42835,1685987923184 2023-06-05 17:58:43,434 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27 
2023-06-05 17:58:43,435 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35041 2023-06-05 17:58:43,435 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-05 17:58:43,436 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-05 17:58:43,436 DEBUG [RS:0;jenkins-hbase20:42835] zookeeper.ZKUtil(162): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42835,1685987923184 2023-06-05 17:58:43,436 WARN [RS:0;jenkins-hbase20:42835] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-05 17:58:43,436 INFO [RS:0;jenkins-hbase20:42835] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-05 17:58:43,437 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184 2023-06-05 17:58:43,438 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,42835,1685987923184] 2023-06-05 17:58:43,441 DEBUG [RS:0;jenkins-hbase20:42835] zookeeper.ZKUtil(162): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42835,1685987923184 2023-06-05 17:58:43,441 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-05 17:58:43,441 INFO [RS:0;jenkins-hbase20:42835] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-05 17:58:43,442 INFO [RS:0;jenkins-hbase20:42835] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-05 17:58:43,443 INFO [RS:0;jenkins-hbase20:42835] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-05 17:58:43,443 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-05 17:58:43,443 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-05 17:58:43,444 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,445 DEBUG [RS:0;jenkins-hbase20:42835] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-06-05 17:58:43,446 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,446 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,446 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,456 INFO [RS:0;jenkins-hbase20:42835] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-05 17:58:43,456 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42835,1685987923184-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,464 INFO [RS:0;jenkins-hbase20:42835] regionserver.Replication(203): jenkins-hbase20.apache.org,42835,1685987923184 started
2023-06-05 17:58:43,464 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,42835,1685987923184, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:42835, sessionid=0x101bc6c2cd70001
2023-06-05 17:58:43,465 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-05 17:58:43,465 DEBUG [RS:0;jenkins-hbase20:42835] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:43,465 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42835,1685987923184'
2023-06-05 17:58:43,466 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-05 17:58:43,466 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-05 17:58:43,466 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42835,1685987923184'
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-05 17:58:43,467 DEBUG [RS:0;jenkins-hbase20:42835] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-05 17:58:43,467 INFO [RS:0;jenkins-hbase20:42835] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-05 17:58:43,467 INFO [RS:0;jenkins-hbase20:42835] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-05 17:58:43,535 DEBUG [jenkins-hbase20:36283] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-05 17:58:43,536 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42835,1685987923184, state=OPENING
2023-06-05 17:58:43,537 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-05 17:58:43,538 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:43,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42835,1685987923184}]
2023-06-05 17:58:43,539 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:58:43,569 INFO [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42835%2C1685987923184, suffix=, logDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184, archiveDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs, maxLogs=32
2023-06-05 17:58:43,578 INFO [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184/jenkins-hbase20.apache.org%2C42835%2C1685987923184.1685987923570
2023-06-05 17:58:43,578 DEBUG [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37661,DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950,DISK], DatanodeInfoWithStorage[127.0.0.1:43667,DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39,DISK]]
2023-06-05 17:58:43,695 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:43,696 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-05 17:58:43,702 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35220, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-05 17:58:43,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-05 17:58:43,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:58:43,711 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42835%2C1685987923184.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184, archiveDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs, maxLogs=32
2023-06-05 17:58:43,718 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184/jenkins-hbase20.apache.org%2C42835%2C1685987923184.meta.1685987923711.meta
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37661,DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950,DISK], DatanodeInfoWithStorage[127.0.0.1:43667,DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39,DISK]]
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-05 17:58:43,718 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-05 17:58:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-05 17:58:43,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-05 17:58:43,720 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/info
2023-06-05 17:58:43,720 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/info
2023-06-05 17:58:43,721 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-05 17:58:43,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:58:43,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-05 17:58:43,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:58:43,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/rep_barrier
2023-06-05 17:58:43,722 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-05 17:58:43,723 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:58:43,723 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-05 17:58:43,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/table
2023-06-05 17:58:43,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/table
2023-06-05 17:58:43,724 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-05 17:58:43,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:58:43,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740
2023-06-05 17:58:43,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740
2023-06-05 17:58:43,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-05 17:58:43,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-05 17:58:43,729 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777079, jitterRate=-0.011893615126609802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-05 17:58:43,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-05 17:58:43,730 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685987923695
2023-06-05 17:58:43,734 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-06-05 17:58:43,735 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-06-05 17:58:43,735 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42835,1685987923184, state=OPEN
2023-06-05 17:58:43,736 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-06-05 17:58:43,737 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-05 17:58:43,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-06-05 17:58:43,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42835,1685987923184 in 197 msec
2023-06-05 17:58:43,741 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-06-05 17:58:43,741 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 357 msec
2023-06-05 17:58:43,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 404 msec
2023-06-05 17:58:43,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685987923743, completionTime=-1
2023-06-05 17:58:43,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-06-05 17:58:43,744 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-06-05 17:58:43,747 DEBUG [hconnection-0x6ec6147d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:58:43,750 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35228, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:58:43,752 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-06-05 17:58:43,752 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685987983752
2023-06-05 17:58:43,752 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685988043752
2023-06-05 17:58:43,752 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec
2023-06-05 17:58:43,759 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36283,1685987923141-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36283,1685987923141-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36283,1685987923141-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:36283, period=300000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-05 17:58:43,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-05 17:58:43,761 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-05 17:58:43,761 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-05 17:58:43,763 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-05 17:58:43,764 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-05 17:58:43,766 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.tmp/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:43,766 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.tmp/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac empty.
2023-06-05 17:58:43,767 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.tmp/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:43,767 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-05 17:58:43,777 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-05 17:58:43,778 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => bf10273ce006a9190224f3668e6eefac, NAME => 'hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/.tmp
2023-06-05 17:58:43,786 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing bf10273ce006a9190224f3668e6eefac, disabling compactions & flushes
2023-06-05 17:58:43,786 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac. after waiting 0 ms
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:43,786 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:43,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for bf10273ce006a9190224f3668e6eefac:
2023-06-05 17:58:43,788 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-05 17:58:43,789 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987923789"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685987923789"}]},"ts":"1685987923789"}
2023-06-05 17:58:43,791 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-05 17:58:43,792 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-05 17:58:43,792 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987923792"}]},"ts":"1685987923792"}
2023-06-05 17:58:43,793 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-05 17:58:43,798 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bf10273ce006a9190224f3668e6eefac, ASSIGN}]
2023-06-05 17:58:43,801 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bf10273ce006a9190224f3668e6eefac, ASSIGN
2023-06-05 17:58:43,802 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bf10273ce006a9190224f3668e6eefac, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42835,1685987923184; forceNewPlan=false, retain=false
2023-06-05 17:58:43,953 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bf10273ce006a9190224f3668e6eefac, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:43,954 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987923953"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685987923953"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685987923953"}]},"ts":"1685987923953"}
2023-06-05 17:58:43,956 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure bf10273ce006a9190224f3668e6eefac, server=jenkins-hbase20.apache.org,42835,1685987923184}]
2023-06-05 17:58:44,121 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bf10273ce006a9190224f3668e6eefac, NAME => 'hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.', STARTKEY => '', ENDKEY => ''}
2023-06-05 17:58:44,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-05 17:58:44,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,123 INFO [StoreOpener-bf10273ce006a9190224f3668e6eefac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,124 DEBUG [StoreOpener-bf10273ce006a9190224f3668e6eefac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/info
2023-06-05 17:58:44,124 DEBUG [StoreOpener-bf10273ce006a9190224f3668e6eefac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/info
2023-06-05 17:58:44,125 INFO [StoreOpener-bf10273ce006a9190224f3668e6eefac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bf10273ce006a9190224f3668e6eefac columnFamilyName info
2023-06-05 17:58:44,125 INFO [StoreOpener-bf10273ce006a9190224f3668e6eefac-1] regionserver.HStore(310): Store=bf10273ce006a9190224f3668e6eefac/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-05 17:58:44,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,131 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-05 17:58:44,131 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened bf10273ce006a9190224f3668e6eefac; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=845564, jitterRate=0.07519127428531647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-05 17:58:44,131 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for bf10273ce006a9190224f3668e6eefac:
2023-06-05 17:58:44,133 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac., pid=6, masterSystemTime=1685987924113
2023-06-05 17:58:44,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bf10273ce006a9190224f3668e6eefac, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:44,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685987924136"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685987924136"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685987924136"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685987924136"}]},"ts":"1685987924136"}
2023-06-05 17:58:44,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-05 17:58:44,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure bf10273ce006a9190224f3668e6eefac, server=jenkins-hbase20.apache.org,42835,1685987923184 in 182 msec
2023-06-05 17:58:44,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-05 17:58:44,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bf10273ce006a9190224f3668e6eefac, ASSIGN in 343 msec
2023-06-05 17:58:44,142 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-05 17:58:44,143 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685987924142"}]},"ts":"1685987924142"}
2023-06-05 17:58:44,144 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-05 17:58:44,146 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-05 17:58:44,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 386 msec
2023-06-05 17:58:44,163 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-05 17:58:44,164 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:58:44,164 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:44,167 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-05 17:58:44,178 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:58:44,181 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec
2023-06-05 17:58:44,189 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-05 17:58:44,196 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-05 17:58:44,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec
2023-06-05 17:58:44,213 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-05 17:58:44,215 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-05 17:58:44,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.986sec
2023-06-05 17:58:44,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-05 17:58:44,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-05 17:58:44,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-05 17:58:44,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36283,1685987923141-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-05 17:58:44,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36283,1685987923141-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-05 17:58:44,217 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-05 17:58:44,306 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ReadOnlyZKClient(139): Connect 0x49aa34e9 to 127.0.0.1:58631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-05 17:58:44,310 DEBUG [Listener at localhost.localdomain/35095] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5893b76e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-05 17:58:44,311 DEBUG [hconnection-0x96fb50d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-05 17:58:44,313 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-05 17:58:44,314 INFO [Listener at localhost.localdomain/35095] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:44,314 INFO [Listener at localhost.localdomain/35095] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-05 17:58:44,318 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-05 17:58:44,318 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:44,319 INFO [Listener at localhost.localdomain/35095] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-05 17:58:44,319 INFO [Listener at localhost.localdomain/35095] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-05 17:58:44,321 INFO [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs, maxLogs=32
2023-06-05 17:58:44,326 INFO [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1/test.com%2C8080%2C1.1685987924321
2023-06-05 17:58:44,327 DEBUG [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37661,DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950,DISK], DatanodeInfoWithStorage[127.0.0.1:43667,DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39,DISK]]
2023-06-05 17:58:44,334 INFO [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1/test.com%2C8080%2C1.1685987924321 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1/test.com%2C8080%2C1.1685987924327
2023-06-05 17:58:44,334 DEBUG [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37661,DS-a06f2fb3-9091-45a4-8a0f-2d7de55d5950,DISK], DatanodeInfoWithStorage[127.0.0.1:43667,DS-23a561fb-db41-45bd-9cb4-0f1f1598fd39,DISK]]
2023-06-05 17:58:44,334 DEBUG [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1/test.com%2C8080%2C1.1685987924321 is not closed yet, will try archiving it next time
2023-06-05 17:58:44,335 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1
2023-06-05 17:58:44,345 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/test.com,8080,1/test.com%2C8080%2C1.1685987924321 to hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs/test.com%2C8080%2C1.1685987924321
2023-06-05 17:58:44,350 DEBUG [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs
2023-06-05 17:58:44,350 INFO [Listener at localhost.localdomain/35095] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685987924327)
2023-06-05 17:58:44,350 INFO [Listener at localhost.localdomain/35095] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-05 17:58:44,350 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49aa34e9 to 127.0.0.1:58631
2023-06-05 17:58:44,350 DEBUG [Listener at localhost.localdomain/35095] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:44,351 DEBUG [Listener at localhost.localdomain/35095] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-05 17:58:44,351 DEBUG [Listener at localhost.localdomain/35095] util.JVMClusterUtil(257): Found active master hash=423873003, stopped=false
2023-06-05 17:58:44,351 INFO [Listener at localhost.localdomain/35095] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:44,352 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:58:44,352 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-05 17:58:44,352 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:44,352 INFO [Listener at localhost.localdomain/35095] procedure2.ProcedureExecutor(629): Stopping
2023-06-05 17:58:44,353 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:44,353 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-05 17:58:44,353 DEBUG [Listener at localhost.localdomain/35095] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6933b787 to 127.0.0.1:58631
2023-06-05 17:58:44,353 DEBUG [Listener at localhost.localdomain/35095] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:44,354 INFO [Listener at localhost.localdomain/35095] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,42835,1685987923184' *****
2023-06-05 17:58:44,354 INFO [Listener at localhost.localdomain/35095] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-05 17:58:44,354 INFO [RS:0;jenkins-hbase20:42835] regionserver.HeapMemoryManager(220): Stopping
2023-06-05 17:58:44,354 INFO [RS:0;jenkins-hbase20:42835] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-05 17:58:44,354 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-05 17:58:44,354 INFO [RS:0;jenkins-hbase20:42835] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-05 17:58:44,354 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(3303): Received CLOSE for bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,354 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:44,355 DEBUG [RS:0;jenkins-hbase20:42835] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x789043ad to 127.0.0.1:58631
2023-06-05 17:58:44,355 DEBUG [RS:0;jenkins-hbase20:42835] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing bf10273ce006a9190224f3668e6eefac, disabling compactions & flushes
2023-06-05 17:58:44,355 INFO [RS:0;jenkins-hbase20:42835] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-05 17:58:44,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,355 INFO [RS:0;jenkins-hbase20:42835] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-05 17:58:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,355 INFO [RS:0;jenkins-hbase20:42835] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-05 17:58:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac. after waiting 0 ms
2023-06-05 17:58:44,355 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-05 17:58:44,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing bf10273ce006a9190224f3668e6eefac 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-05 17:58:44,355 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-06-05 17:58:44,355 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1478): Online Regions={bf10273ce006a9190224f3668e6eefac=hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac., 1588230740=hbase:meta,,1.1588230740}
2023-06-05 17:58:44,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-05 17:58:44,356 DEBUG [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1504): Waiting on 1588230740, bf10273ce006a9190224f3668e6eefac
2023-06-05 17:58:44,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-05 17:58:44,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-05 17:58:44,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-05 17:58:44,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-05 17:58:44,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB
2023-06-05 17:58:44,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/.tmp/info/955a825587444a81a3108e7b13b75543
2023-06-05 17:58:44,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/.tmp/info/1da1a8f179924d81be58b400fe44547b
2023-06-05 17:58:44,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/.tmp/info/1da1a8f179924d81be58b400fe44547b as hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/info/1da1a8f179924d81be58b400fe44547b
2023-06-05 17:58:44,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/info/1da1a8f179924d81be58b400fe44547b, entries=2, sequenceid=6, filesize=4.8 K
2023-06-05 17:58:44,384 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/.tmp/table/f3276c1668ee43bfad9fb42a69369ab2
2023-06-05 17:58:44,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for bf10273ce006a9190224f3668e6eefac in 29ms, sequenceid=6, compaction requested=false
2023-06-05 17:58:44,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/namespace/bf10273ce006a9190224f3668e6eefac/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-05 17:58:44,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/.tmp/info/955a825587444a81a3108e7b13b75543 as hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/info/955a825587444a81a3108e7b13b75543
2023-06-05 17:58:44,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for bf10273ce006a9190224f3668e6eefac:
2023-06-05 17:58:44,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685987923760.bf10273ce006a9190224f3668e6eefac.
2023-06-05 17:58:44,393 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/info/955a825587444a81a3108e7b13b75543, entries=10, sequenceid=9, filesize=5.9 K
2023-06-05 17:58:44,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/.tmp/table/f3276c1668ee43bfad9fb42a69369ab2 as hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/table/f3276c1668ee43bfad9fb42a69369ab2
2023-06-05 17:58:44,398 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/table/f3276c1668ee43bfad9fb42a69369ab2, entries=2, sequenceid=9, filesize=4.7 K
2023-06-05 17:58:44,399 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 43ms, sequenceid=9, compaction requested=false
2023-06-05 17:58:44,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-06-05 17:58:44,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-05 17:58:44,407 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-05 17:58:44,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-05 17:58:44,407 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-05 17:58:44,446 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-06-05 17:58:44,447 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-06-05 17:58:44,556 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,42835,1685987923184; all regions closed.
2023-06-05 17:58:44,557 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:44,568 DEBUG [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs
2023-06-05 17:58:44,568 INFO [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C42835%2C1685987923184.meta:.meta(num 1685987923711)
2023-06-05 17:58:44,568 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/WALs/jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:44,574 DEBUG [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/oldWALs
2023-06-05 17:58:44,574 INFO [RS:0;jenkins-hbase20:42835] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C42835%2C1685987923184:(num 1685987923570)
2023-06-05 17:58:44,574 DEBUG [RS:0;jenkins-hbase20:42835] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:44,575 INFO [RS:0;jenkins-hbase20:42835] regionserver.LeaseManager(133): Closed leases
2023-06-05 17:58:44,575 INFO [RS:0;jenkins-hbase20:42835] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-05 17:58:44,575 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-05 17:58:44,576 INFO [RS:0;jenkins-hbase20:42835] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:42835
2023-06-05 17:58:44,580 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42835,1685987923184
2023-06-05 17:58:44,580 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:58:44,580 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-05 17:58:44,580 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,42835,1685987923184]
2023-06-05 17:58:44,580 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,42835,1685987923184; numProcessing=1
2023-06-05 17:58:44,581 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,42835,1685987923184 already deleted, retry=false
2023-06-05 17:58:44,581 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,42835,1685987923184 expired; onlineServers=0
2023-06-05 17:58:44,581 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36283,1685987923141' *****
2023-06-05 17:58:44,581 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-05 17:58:44,581 DEBUG [M:0;jenkins-hbase20:36283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d7c4e1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-06-05 17:58:44,581 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36283,1685987923141
2023-06-05 17:58:44,582 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36283,1685987923141; all regions closed.
2023-06-05 17:58:44,582 DEBUG [M:0;jenkins-hbase20:36283] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-05 17:58:44,582 DEBUG [M:0;jenkins-hbase20:36283] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-05 17:58:44,582 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-05 17:58:44,582 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987923349] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685987923349,5,FailOnTimeoutGroup]
2023-06-05 17:58:44,582 DEBUG [M:0;jenkins-hbase20:36283] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-05 17:58:44,582 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987923346] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685987923346,5,FailOnTimeoutGroup]
2023-06-05 17:58:44,583 INFO [M:0;jenkins-hbase20:36283] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-05 17:58:44,583 INFO [M:0;jenkins-hbase20:36283] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-05 17:58:44,583 INFO [M:0;jenkins-hbase20:36283] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-06-05 17:58:44,583 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-05 17:58:44,583 DEBUG [M:0;jenkins-hbase20:36283] master.HMaster(1512): Stopping service threads
2023-06-05 17:58:44,583 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-05 17:58:44,584 INFO [M:0;jenkins-hbase20:36283] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-05 17:58:44,584 ERROR [M:0;jenkins-hbase20:36283] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-06-05 17:58:44,584 INFO [M:0;jenkins-hbase20:36283] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-05 17:58:44,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-05 17:58:44,584 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-05 17:58:44,585 DEBUG [M:0;jenkins-hbase20:36283] zookeeper.ZKUtil(398): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-05 17:58:44,585 WARN [M:0;jenkins-hbase20:36283] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-05 17:58:44,585 INFO [M:0;jenkins-hbase20:36283] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-05 17:58:44,585 INFO [M:0;jenkins-hbase20:36283] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-05 17:58:44,586 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-05 17:58:44,586 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:44,586 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-05 17:58:44,586 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-05 17:58:44,586 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:58:44,586 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-06-05 17:58:44,599 INFO [M:0;jenkins-hbase20:36283] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/59a4f76a8a79416a969cb49b98920d1e 2023-06-05 17:58:44,605 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/59a4f76a8a79416a969cb49b98920d1e as hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/59a4f76a8a79416a969cb49b98920d1e 2023-06-05 17:58:44,611 INFO [M:0;jenkins-hbase20:36283] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35041/user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/59a4f76a8a79416a969cb49b98920d1e, entries=8, sequenceid=66, filesize=6.3 K 2023-06-05 17:58:44,612 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=66, compaction requested=false 2023-06-05 
17:58:44,613 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-05 17:58:44,613 DEBUG [M:0;jenkins-hbase20:36283] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-05 17:58:44,613 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78893a05-fb86-764a-4eef-e93f28b3ef27/MasterData/WALs/jenkins-hbase20.apache.org,36283,1685987923141 2023-06-05 17:58:44,616 INFO [M:0;jenkins-hbase20:36283] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-05 17:58:44,616 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-05 17:58:44,616 INFO [M:0;jenkins-hbase20:36283] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36283 2023-06-05 17:58:44,618 DEBUG [M:0;jenkins-hbase20:36283] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,36283,1685987923141 already deleted, retry=false 2023-06-05 17:58:44,755 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-05 17:58:44,755 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): master:36283-0x101bc6c2cd70000, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-05 17:58:44,755 INFO [M:0;jenkins-hbase20:36283] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36283,1685987923141; zookeeper connection closed. 
2023-06-05 17:58:44,856 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-05 17:58:44,856 INFO [RS:0;jenkins-hbase20:42835] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,42835,1685987923184; zookeeper connection closed. 2023-06-05 17:58:44,856 DEBUG [Listener at localhost.localdomain/35095-EventThread] zookeeper.ZKWatcher(600): regionserver:42835-0x101bc6c2cd70001, quorum=127.0.0.1:58631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-05 17:58:44,858 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@738570a1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@738570a1 2023-06-05 17:58:44,858 INFO [Listener at localhost.localdomain/35095] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-05 17:58:44,859 WARN [Listener at localhost.localdomain/35095] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:58:44,869 INFO [Listener at localhost.localdomain/35095] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:58:44,975 WARN [BP-1877493100-148.251.75.209-1685987922652 heartbeating to localhost.localdomain/127.0.0.1:35041] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:58:44,975 WARN [BP-1877493100-148.251.75.209-1685987922652 heartbeating to localhost.localdomain/127.0.0.1:35041] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1877493100-148.251.75.209-1685987922652 (Datanode Uuid 0bb19d44-7298-44db-835c-40c5790a50cd) service to localhost.localdomain/127.0.0.1:35041 2023-06-05 17:58:44,976 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/dfs/data/data3/current/BP-1877493100-148.251.75.209-1685987922652] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:58:44,976 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/dfs/data/data4/current/BP-1877493100-148.251.75.209-1685987922652] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:58:44,977 WARN [Listener at localhost.localdomain/35095] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-05 17:58:44,984 INFO [Listener at localhost.localdomain/35095] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-05 17:58:45,087 WARN [BP-1877493100-148.251.75.209-1685987922652 heartbeating to localhost.localdomain/127.0.0.1:35041] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-05 17:58:45,087 WARN [BP-1877493100-148.251.75.209-1685987922652 heartbeating to localhost.localdomain/127.0.0.1:35041] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1877493100-148.251.75.209-1685987922652 (Datanode Uuid 240c544f-9bd4-475d-ba76-8cf367a08f21) service to localhost.localdomain/127.0.0.1:35041 2023-06-05 17:58:45,089 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/dfs/data/data1/current/BP-1877493100-148.251.75.209-1685987922652] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted 
waiting to refresh disk information: sleep interrupted 2023-06-05 17:58:45,090 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e2b712d9-786d-8de9-6ad8-f87938291852/cluster_bb7d33f9-3d4a-6e5f-8585-c7802830a80d/dfs/data/data2/current/BP-1877493100-148.251.75.209-1685987922652] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-05 17:58:45,103 INFO [Listener at localhost.localdomain/35095] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-05 17:58:45,220 INFO [Listener at localhost.localdomain/35095] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-05 17:58:45,231 INFO [Listener at localhost.localdomain/35095] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-05 17:58:45,240 INFO [Listener at localhost.localdomain/35095] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=131 (was 108) - Thread LEAK? -, OpenFileDescriptor=568 (was 545) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=35 (was 38), ProcessCount=167 (was 167), AvailableMemoryMB=6076 (was 6089)